Can DeepMind's AlphaCode Outperform Human Coders?
Written by Sue Gee
Thursday, 03 February 2022
DeepMind has developed an AI capable of solving some competitive programming problems. When tried out on recent CodeForces contests AlphaCode achieved a rank within the top 54% of participants.
DeepMind is the Alphabet subsidiary founded in 2010 with the goal of creating AGI, artificial general intelligence. It is now almost six years since AlphaGo defeated Lee Sedol, one of the world's highest-rated Go players. After that success, DeepMind's neural networks, which had demonstrated super-human problem-solving ability, went on to tackle other realms, and we have reported their successes in biology, quantum chemistry and meteorology.
Recently DeepMind turned its attention to computer science. It has developed an AI known as AlphaCode and has now reported the results of its first foray into competitive programming, a pursuit that is popular among both professionals and amateurs and that until now has been the preserve of humans.
In its blog post, Competitive programming with AlphaCode, DeepMind explains how it used transformer-based language models to generate code and then filtered the resulting code to select a small set of candidates in Python and C++ to execute and evaluate. Training proceeded in two stages: pretraining on selected public GitHub code, followed by fine-tuning on a dataset of programming problems drawn from five sources, including HackerEarth and Codeforces. These problems have lengthy descriptions and come with test cases in the form of paired inputs and outputs, as well as both correct and incorrect human solutions in a variety of languages. To validate its performance, AlphaCode made submissions to 10 competitions on Codeforces using the identities SelectorUnlimited, WaggleCollide and AngularNumeric.
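The generate-then-filter idea can be sketched in a few lines of Python. This is not DeepMind's code, just a hedged illustration of the principle: produce many candidate programs, run each against the problem's example input/output pairs, and keep only those that pass. The `filter_candidates` function and the toy "double the number" problem below are invented for illustration.

```python
# Sketch of AlphaCode-style filtering (illustrative, not DeepMind's code):
# keep only candidate programs that reproduce the example outputs.

def filter_candidates(candidates, examples):
    """candidates: callables mapping an input string to an output string.
    examples: (input, expected_output) pairs from the problem statement."""
    survivors = []
    for program in candidates:
        try:
            if all(program(inp).strip() == out.strip() for inp, out in examples):
                survivors.append(program)
        except Exception:
            pass  # candidates that crash are simply discarded
    return survivors

# Toy usage: two "generated" candidates for a "double the number" problem.
examples = [("2", "4"), ("5", "10")]
candidates = [
    lambda s: str(int(s) * 2),   # correct on both examples
    lambda s: str(int(s) + 2),   # passes the first example, fails the second
]
print(len(filter_candidates(candidates, examples)))  # -> 1
```

In the real system the candidates are whole programs executed against the tests, and clustering is used to whittle thousands of survivors down to a handful of submissions, but the filtering step rests on the same check.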
Here is one of the accepted submissions made by SelectorUnlimited in Python to the recent Educational Codeforces Round 118, which was rated for Division 2, that is for competitors rated below 1900.
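The submission itself does not survive in this text. Purely as an illustration of the shape such solutions take, and of the stylistic quirk questioned later in the article (a function defined inside a for loop), here is a minimal competitive-programming-style Python sketch; it is emphatically not AlphaCode's actual code, and the toy problem (sum an array per test case) is invented.

```python
import io

# Illustrative only -- NOT AlphaCode's submission. A typical
# competitive-programming solution shape: read t test cases,
# process each one, emit an answer.
def solve(stream):
    t = int(stream.readline())
    results = []
    for _ in range(t):
        # Defining the helper inside the loop works, but re-creates the
        # function object on every iteration -- unidiomatic style.
        def array_sum(values):
            return sum(values)
        n = int(stream.readline())                       # array length
        nums = list(map(int, stream.readline().split())) # the array
        assert len(nums) == n
        results.append(array_sum(nums))
    return results

# Two test cases: [1, 2, 3] and [10, 20]
print(solve(io.StringIO("2\n3\n1 2 3\n2\n10 20\n")))  # -> [6, 30]
```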
The blog post on Codeforces announcing AlphaCode's success reveals that, had these accounts participated in real competitions, their rating would have been about 1300, and comments:
In 1997 Kasparov played against (and lost) the supercomputer DeepBlue. Perhaps we will be witnessing a confrontation between tourist [the platform's top rated contestant with a rating of 3809] and AI in near future.
The DeepMind blog quotes Mike Mirzayanov, the founder of Codeforces:
I can safely say the results of AlphaCode exceeded my expectations. I was sceptical because even in simple competitive problems it is often required not only to implement the algorithm, but also (and this is the most difficult part) to invent it. AlphaCode managed to perform at the level of a promising new competitor. I can't wait to see what lies ahead!
But is that really a function defined within a for loop in the example above? Does this reflect poorly on AlphaCode, or should we blame the coders it learned from?
Last Updated ( Wednesday, 06 April 2022 )