• Lynx: The First Video-Enabled Humanoid Robot with Amazon Alexa
• NVIDIA Self-Driving Car Demo at CES 2017
Sometimes the news is reported well enough elsewhere and we have little to add other than to bring it to your attention.
No Comment is a format where we present original source information, lightly edited, so that you can decide if you want to follow it up.
Motion-Planning Chip Speeds Robots
I don't know about you, but whenever I watch a robot video I look for the x10 or so in the corner indicating that the footage has been speeded up. It is honest to label the speed-up, but even at the higher speed the movements can look ponderous. A robot moves its foot so slowly into an obviously stable configuration that you just want to shout "get on with it". Of course, the problem is that what looks obvious to us reduces to a huge number of tedious computations, and those take a lot of time. The solution is equally obvious: implement the software in hardware and let the robot move at a reasonable speed. The big problem is deciding what software to put in the hardware.
Researchers at Duke University have implemented a custom processor that does motion planning - in particular, collision detection.
If you watch the video, there is a good explanation of what is going on and you will be impressed by how quickly the robot arm leaps into action. It is what we need if robot arms are to become more useful as part of more complete domestic robots.
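The general idea behind this kind of hardware is to precompute a roadmap of possible motions and, for each roadmap edge, the set of space voxels the arm sweeps through when moving along it. At run time, collision detection is then just an intersection test between each edge's swept voxels and the voxels currently occupied by obstacles - tests a chip can do in parallel. A minimal Python sketch of that idea (the roadmap edges and voxel data here are invented purely for illustration):

```python
# Sketch of roadmap-based collision checking: each roadmap edge has a
# precomputed set of voxels the arm sweeps through when traversing it.
# An edge is unusable if its swept voxels intersect the voxels currently
# occupied by obstacles. Hardware runs all these intersection tests in
# parallel; in software we simply loop over the edges.

# Precomputed offline: edge id -> swept-volume voxels (illustrative data)
edge_swept_voxels = {
    "A-B": {(0, 0, 1), (0, 1, 1), (0, 2, 1)},
    "B-C": {(1, 2, 1), (2, 2, 1)},
    "A-C": {(0, 0, 2), (1, 1, 2), (2, 2, 2)},
}

def usable_edges(obstacle_voxels):
    """Return the roadmap edges whose swept volume avoids all obstacles."""
    return {edge for edge, swept in edge_swept_voxels.items()
            if not (swept & obstacle_voxels)}

# An obstacle occupying voxel (1, 2, 1) knocks out edge B-C only.
print(sorted(usable_edges({(1, 2, 1)})))  # -> ['A-B', 'A-C']
```

The expensive part - computing the swept volumes - happens once, offline; the run-time work is reduced to set intersections, which is what makes a hardware implementation so fast.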
Lynx: The First Video-Enabled Humanoid Robot with Amazon Alexa
This year's big hit is going to be voice control. It is the new way for users to interact with hardware, and the market leader at the moment is Alexa. You can tell that this is a trend from the number of popular news stories about silly ways in which people accidentally interact with Alexa by mentioning it in passing without meaning to trigger an activity. The latest story is about Alexa buying things in response to overhearing references to them on the radio.
If voice input is going to be big it is because of the casual way it lets us interact with hardware - and what more sophisticated hardware is there than a robot? Now Ubtech has what it claims is the first humanoid robot that can be commanded using Alexa. If you have had a look at the Alexa Skills API you will know that this isn't rocket science and the robot is unlikely to be the first, but it is still interesting. Take a look at the promo video:
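For anyone who hasn't looked at it, a custom Alexa skill is mostly a web service that receives a JSON request describing the user's intent and returns a JSON response telling Alexa what to say - the robot-control part is just whatever you do in between. A minimal sketch in Python (the "MoveRobotIntent" name, the slot layout, and the command handling are invented here for illustration, not Ubtech's actual skill):

```python
# Minimal sketch of an Alexa custom-skill handler. Alexa POSTs a JSON
# request; the service replies with JSON in the Alexa response format.
# The intent name and command dispatch are hypothetical.

def handle_request(request):
    """Turn an Alexa intent request into an Alexa-format response."""
    intent = request["request"]["intent"]
    if intent["name"] == "MoveRobotIntent":
        direction = intent["slots"]["Direction"]["value"]
        speech = f"Moving {direction}"  # real code would command the robot here
    else:
        speech = "Sorry, I don't know that command"
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

req = {"request": {"intent": {"name": "MoveRobotIntent",
                              "slots": {"Direction": {"value": "forward"}}}}}
print(handle_request(req)["response"]["outputSpeech"]["text"])  # -> Moving forward
```

Which is to say: wiring Alexa to a robot is a weekend project once the robot itself exposes an API - the hard part was always the robot.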
There also seems to be a new law for 2017 - all small humanoid robots are copies of Nao and all small emotional robots are copies of Jibo.
NVIDIA Self-Driving Car Demo at CES 2017
Self-driving cars probably won't be a thing in 2017, but that isn't going to stop us looking at the current demos and wondering why not. There seem to be two big approaches to self-driving. Google seems to be going in for engineering combined with clever algorithms. It creates high-quality maps and handcrafts algorithms to specify how the car should behave. This is safe, but very tedious and very slow to develop. Companies like NVIDIA, on the other hand, seem to be embracing the wider AI approach and have neural networks learning how to drive in a range of situations. This isn't as safe, because there is no way of knowing exactly what a neural network will do in situations it hasn't encountered before. All we can do is build up trust - much as we do with a human driver.
Here is NVIDIA's latest promo video showing its BB8 autonomous car:
As I said earlier, if they keep showing us promos like this then the next question they have to answer is: why isn't the self-driving car here and ready for me to use?
There is a lot of flexibility in how you can configure a JPEG file to best represent an image. Now Google's Guetzli can find optimum settings and so produce files that are up to 45% smaller than those from other encoders [ ... ]