How Can You Not Be Impressed By AI?
Written by Mike James
Wednesday, 10 December 2025
There is a big backlash against AI at the moment and, given the threat it poses to jobs, this can hardly be an unexpected response. However, much of the backlash focuses on how useless and unimpressive it is. This is crazy. AI has achieved so much of its goal in such a short time that this is an entirely untenable position. Mustafa Suleyman, Microsoft's AI CEO, recently said: "Jeez there so many cynics! It cracks me up when I hear people call AI underwhelming. I grew up playing Snake on a Nokia phone! The fact that people are unimpressed that we can have a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me." This position has been echoed by many company CEOs, including Jensen Huang and Bill Gates. What you need to realize is that they are not wrong, but they might not be talking about what you think they are.

The tech CEOs who are blown away by AI have been following the miserable progress that AI made in the years before the era of the large deep neural network. Back in the day, when I was a wet-behind-the-ears AI researcher, our goals were very simple. In an afternoon daydream we may well have considered where we were going. We thought then that the whole point was artificial vision, translation, reasoning and so on. The jobs we thought might be eliminated were boring enough to deserve elimination. Most of the time we didn't think of jobs that involved extensive human interaction, like call centers, as being under threat and we certainly didn't think that programming would be impacted. Our aims were pathetically simple - to automatically read a car number plate, to recognize a face among a small set of faces, to perform translation between one language and another, to implement OCR, and so on. This was not the AI of world domination and certainly not that of Skynet.

The first neural network I implemented had just 20 neurons in three layers (something like the toy sketch at the end of this article). The time it took to train was fairly short, but this was mostly because I simply didn't have very much training data. The mainframe I used to run the network would have easily been beaten for power by any 2G mobile phone. The result worked reasonably well on the training data, but not so well on new data. It wasn't clear if these were fundamental problems or simply due to the inadequacies of our data and implementation. To be clear, at the time it seemed reasonable that neural networks should be able to do the job, but nobody knew if there was another really deep idea that we were missing. Put simply, we had no idea if neural networks were all we needed.

This was the dilemma for most AI researchers at the time. Pioneers like Geoffrey Hinton worked on different extensions of the neural network, like the infamous Boltzmann machine, which was promising but even harder to train. Of course, we now know that this was unnecessary, as the solution was the plain old neural network plus some extras to make feedforward networks do the job of recurrent networks and, of course, lots and lots of data and computing power. Even if back then we had had the computing power, without the Internet, and specifically the web, we couldn't have sourced the amount of data needed to train such big networks. Instead of neural networks we were trying computationally cheaper approaches based on symbolic reasoning that didn't need as much data for training - in fact they were more constructed than trained.
Most of the time they didn't do the job, because constructing intelligence this way simply borrows some human intelligence rather than creating anything new. There were some attempts at scaling the symbolic approach, but again they were doomed by the lack of computational power and of the huge structured data sets that were needed.

But I am getting away from my original point. Despite a lot of effort, AI was only slightly impressive. Our vision models could just about do OCR and perhaps read a number plate. Our language models always promised more than they delivered, and the best-performing models we had really were chatbots in the style of the original Eliza, i.e. a set of simple rules that fooled innocent humans. The best you could say is that we had some toys, but nothing too serious, even if companies tried to tell us that their translation software, expert system or whatever was up to the job. We sort of felt that progress was being made, but it was still a long road to anything useful. Most AI projects of the time started with a lot of hope that quickly fell away, to be replaced by an appreciation of everything that was wrong with the approach.

If a time-traveling genie had given us a glimpse of even the poorest of today's large language models, which we now seem to take for granted, we would have been astounded. Yes, our minds would have been well and truly blown and, what is more, we would have assumed that the AI problem was well and truly solved. Of course, we know that it isn't solved, but how can you not be impressed by where we are?
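For anyone who never met the networks of that era, here is a minimal sketch, in Python with NumPy, of roughly the kind of thing described above: about 20 units in three layers, sigmoid activations, trained by plain backpropagation on a tiny dataset. The layer sizes, learning rate and made-up data are illustrative assumptions, not the original code, but they convey the scale - the whole model has fewer weights than a single attention head of a modern LLM.

```python
# A toy three-layer network of roughly 20 units, trained by plain
# backpropagation. Everything here (sizes, data, learning rate) is an
# illustrative assumption, not a reconstruction of any real system.
import numpy as np

rng = np.random.default_rng(0)

# 8 input units, 8 hidden units, 4 output units - about 20 "neurons" in all.
n_in, n_hidden, n_out = 8, 8, 4
W1 = rng.normal(0, 0.5, (n_in, n_hidden))
W2 = rng.normal(0, 0.5, (n_hidden, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny, fabricated training set: 16 random binary patterns, each given a
# one-hot class label - standing in for the scarce data of the period.
X = rng.integers(0, 2, (16, n_in)).astype(float)
labels = rng.integers(0, n_out, 16)
Y = np.eye(n_out)[labels]

lr = 0.5
for epoch in range(2000):
    # Forward pass through the two weight layers.
    h = sigmoid(X @ W1)
    y = sigmoid(h @ W2)

    # Backward pass: squared-error gradient pushed back through the sigmoids.
    err = y - Y
    grad_out = err * y * (1 - y)
    grad_hidden = (grad_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ grad_out
    W1 -= lr * X.T @ grad_hidden

print("training accuracy:", np.mean(np.argmax(y, axis=1) == labels))
```

Run as-is it will happily fit its 16 random patterns and, much as the article describes, there is no reason to expect it to do anything sensible on data it hasn't seen.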
Related Articles
Anthropic Says Claude Sonnet 4.5 Is World's Best Coding Model
If You Sleep Well Tonight You May Not Have Understood
Hinton, LeCun and Bengio Receive 2018 Turing Award
The Unreasonable Effectiveness Of GPT-3

