It is something like the philosopher's stone. A single equation for intelligence. A sort of E=mc^2 that would put intelligence, and more particularly artificial intelligence, on a sound theoretical footing. But could it be as simple as this TED talk video suggests?
Alex Wissner-Gross thinks that he has worked out the physical basis for intelligence. He has devised an equation that, in his opinion, tells every system how to behave in a way that we would label intelligent. His equation is easy enough to write down:
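The equation appears as an image in the original article; reconstructed from the description that follows (and from Wissner-Gross and Freer's paper, where the subscript c stands for "causal"), it is roughly:

```latex
F(X, \tau) = T_c \, \nabla_X \, S_c(X, \tau)
```

Here F is the causal entropic force acting on the macrostate X, T_c is the notional "causal" temperature, and S_c is the causal entropy over a time horizon tau.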
The S is related to the entropy of the system and the T is a notional "temperature". If you know math then the triangle symbol will be familiar to you as the gradient, and the equation roughly says that there is a force in the direction of increasing entropy. This force, the causal entropic force, is what is equated to intelligence.
If you have encountered the ideas of thermodynamics or information theory then you may already know that entropy and information are intertwined in ways that are very complicated once you go beyond the general ideas of order and disorder. So it isn't really surprising that entropy would figure in any theory of intelligence. This isn't the first time that "thinkers" have linked the two together, but mostly the connection has been stated in vague, almost philosophical, ways.
Now we have an exact equation.
If you read the paper that explains how it all works then you will realize that the exact equation embodies a philosophical principle. This states that intelligence is behaviour that is motivated by the need to keep as many options open as possible. It attempts to reach states that maximize the freedom to act. That is, if you build a system that moves its state in the direction of the causal entropic force, the system will move towards states that maximize the causal entropy.
Of course, what is hidden in all of this is the exact definition of causal entropy, because this isn't the same as the usual entropy of a system.
Causal entropy is a path integral of the probability of a system evolving from its current state to new states.
If you examine the formulation more carefully then you will notice that in fact it is the information content of a path leading from the current state to a future state that is integrated over all paths. The causal entropic force causes the system to evolve towards the state that maximizes this integral, i.e. a state with lots of highly probable future states.
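To get a feel for the idea, here is a toy Monte Carlo sketch in Python. It is not the paper's actual formulation (which is a path integral over continuous trajectories); the function name, the clipped random-walk dynamics, and all parameters are inventions for illustration. The "causal entropy" of a state is approximated by the Shannon entropy of where a bounded random walk starting from that state can end up after tau steps:

```python
import random
from collections import Counter
from math import log

def causal_entropy(x0, tau=20, n_paths=5000, lo=0, hi=20):
    """Monte Carlo proxy for causal entropy: the Shannon entropy (in nats)
    of the distribution of end states reachable from x0 after tau random
    steps inside the box [lo, hi]. A toy stand-in for the path integral."""
    ends = Counter()
    for _ in range(n_paths):
        x = x0
        for _ in range(tau):
            # Random step, clipped at the walls of the box.
            x = min(hi, max(lo, x + random.choice((-1, 0, 1))))
        ends[x] += 1
    return -sum((c / n_paths) * log(c / n_paths) for c in ends.values())

# A particle near a wall has many of its futures squeezed into the same
# place, so the entropy-of-futures gradient points towards the middle.
print(causal_entropy(1) < causal_entropy(10))  # True
```

This reproduces the flavour of the particle-in-a-box demo: a state in the middle of the box has more distinct, roughly equiprobable futures than a state pressed against a wall, so a system climbing this entropy gradient drifts towards the center.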
The video explains some of this and provides examples of the principle in action where it is claimed to replicate a number of "humanlike" intelligent behaviours including cooperation and tool use.
Impressed?
There are a lot of questions you should be asking. The examples given are of contrived systems, not natural ones. No cart spontaneously balances a pendulum, particles do not move to the center of a box, and so on. There is no evidence that maximizing causal entropy has any correspondence to real physical systems.
In addition, all of the examples are of continuous dynamics over-interpreted into something more meaningful to the viewer. An ape with a tool trying to get some food isn't anything like a few disks bouncing around a box. The ape is motivated by a huge range of things and not just an innate desire to maximize its future freedom to act. True, eating some food does give the ape an increased freedom to act, but one grape or grub, say, is hardly going to set up a gradient down which intelligence flows.
It could be that some measure connected to entropy is related to intelligent behaviour. Recent work has suggested, for example, a connection between entropy, reversibility and why things spontaneously organize into reproductive units when there is abundant energy. This equation of life seems to have a much better chance of being true than the equation of intelligence.
The trouble with AI is that it is too easy to make something trivial look sophisticated if you put the right spin on it, and the right spin usually turns out to be physics.