|Forget The Turing Test It's The MacGyver Test That Matters|
|Written by Mike James|
|Sunday, 30 April 2017|
There are lots of generalized Turing tests, all working on the principle that if you can't tell the artificial from the natural then they are, for all intents and purposes, the same. Now we have the MacGyver test, which aims to measure how much devious problem-solving intelligence a system has.
Vasanth Sarathy and Matthias Scheutz at Tufts University aren't content with the Turing test. They think that how AIs square off against TV's MacGyver is the real test.
Consider a situation when your only suit is covered in lint and you do not own a lint remover. Being resourceful, you reason that a roll of duct tape might be a good substitute. You then solve the problem of lint removal by peeling a full turn’s worth of tape and re-attaching it backwards onto the roll to expose the sticky side all around the roll. By rolling it over your suit, you can now pick up all the lint. This type of everyday creativity and resourcefulness is a hallmark of human intelligence and best embodied in the 1980s television series MacGyver which featured a clever secret service agent who used common objects around him like paper clips and rubber bands in inventive ways to escape difficult life-or-death situations.
Talking of the 1980s TV series, I have to say that the recent revival just hasn't got the same approach. There is far too much electronic wizardry that solves the problem without really letting you see how it was solved. We value the sort of raw intelligence and creativity that the first series embodied, and if we want AI to do clever things it has to learn to think outside the box as well.
To test how advanced AI is in this respect, the researchers suggest the MacGyver Test, or MT.
The proposed evaluation framework, based on the idea of MacGyver-esque creativity, is intended to answer the question whether embodied machines can generate, execute and learn strategies for identifying and solving seemingly unsolvable real-world problems. The idea is to present an agent with a problem that is unsolvable with the agent’s initial knowledge and observing the agent’s problem solving processes to estimate the probability that the agent is being creative: if the agent can think outside of its current context, take some exploratory actions, and incorporate relevant environmental cues and learned knowledge to make the problem tractable (or at least computable) then the agent has the general ability to solve open-world problems more effectively.
The MT is defined as a problem, expressed in a standard planning language, that has a goal state that is currently unreachable - a MacGyver problem. What this means is that the problem is not solvable by a planning algorithm, which is generally regarded as not "intelligent"; the agent has to discover something new about its environment to make the goal reachable.
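To make the idea concrete, here is a minimal sketch - not the authors' formalism - of a MacGyver problem in a STRIPS-style setting. The toy planner, the lint-removal domain and all the fact and action names are illustrative assumptions: the goal is unreachable with the agent's initial actions, and becomes reachable only once a new, "discovered" action is added to the domain.

```python
# Minimal sketch (illustrative, not the paper's formalism) of a
# MacGyver problem: a STRIPS-style planning problem whose goal is
# unreachable until the agent extends its domain with a new action.
from collections import deque

def plan(init, goal, actions):
    """Breadth-first search over states (frozensets of facts).
    Each action is (name, preconditions, add_list, delete_list).
    Returns a list of action names, or None if the goal is unreachable."""
    start = frozenset(init)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for name, pre, add, delete in actions:
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None  # goal state unreachable with this domain

# Hypothetical domain loosely following the lint-covered-suit example.
goal = frozenset({"suit-clean"})
init = {"have-tape", "suit-linty"}
known = [
    # (name, preconditions, add effects, delete effects)
    ("brush-suit", frozenset({"have-brush"}),
     frozenset({"suit-clean"}), frozenset({"suit-linty"})),
]

print(plan(init, goal, known))  # None - a MacGyver problem

# The creative step: realizing duct tape worn sticky-side out can act
# as a lint remover adds a new action, making the goal reachable.
discovered = ("roll-tape-over-suit", frozenset({"have-tape"}),
              frozenset({"suit-clean"}), frozenset({"suit-linty"}))

print(plan(init, goal, known + [discovered]))  # ['roll-tape-over-suit']
```

Once the extra action is in the domain, an entirely ordinary planner solves the problem - which is the point: the "intelligent" part is the domain extension, not the search.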
With luminaries such as Elon Musk and Stephen Hawking currently espousing fears that AI might in the future keep humans as pets, or make us go extinct, a popular rejoinder is that AI isn't anywhere near MacGyver: there are currently no general purpose AI systems, just ones that solve specific problems.
Perhaps the MacGyver Test is further off being passed than the Turing Test.
Perhaps, given the real worries about AI, we should institute the Terminator Test - and you know what that would involve - and we can all hope AI doesn't pass that one for even longer, preferably for ever.