It is difficult to know what sort of breakthrough this might be, but it is an amazing video. Two quadrotors balance a pole, an inverted pendulum, and then play toss and catch with it.
Quadrotors seem to be the platform of choice for anyone wanting to demonstrate that they can solve dynamical equations in real time. It makes sense because you have control of movement in 3D plus orientation, and the ability to react fast enough to make use of the calculations.
In this case the quadrotors start out by balancing a pole, an inverted pendulum, and then, just as you think the cool trick is enough for a round of applause, the first quadrotor tosses the pole and the second catches it and continues to balance it!
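The balancing part is a classic inverted-pendulum stabilization problem. As a rough sketch of the idea, and not the actual controller used in the video, here is an LQR controller for a simplified planar cart-pole model; the pole length, weights, and all other numbers are illustrative assumptions.

```python
# A minimal sketch of balancing a planar inverted pendulum with LQR.
# The real system works in 3D on a quadrotor; this simplified planar
# model and all numeric parameters are assumptions for illustration.
import numpy as np
from scipy.linalg import solve_continuous_are

g, L = 9.81, 1.0          # gravity, pole length (assumed values)

# Linearized state about upright: [base pos, base vel, pole angle, pole ang. vel]
A = np.array([[0, 1, 0,     0],
              [0, 0, 0,     0],
              [0, 0, 0,     1],
              [0, 0, g / L, 0]])
B = np.array([[0], [1], [0], [-1 / L]])   # input: base acceleration

Q = np.diag([1, 1, 10, 1])   # penalize pole angle most heavily
R = np.array([[0.1]])

# Solve the continuous-time algebraic Riccati equation for the gain K
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# For the pole to stay balanced, the closed-loop matrix A - B K must be
# stable, i.e. all its eigenvalues must lie in the left half-plane
eigs = np.linalg.eigvals(A - B @ K)
print(np.all(eigs.real < 0))   # True
```

The same idea scales up: the quadrotor's controller has to do this in three dimensions, at high rate, using the vehicle's thrust and attitude as the "base acceleration" input.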
The video explains some of the methods used to perform the trick, but as well as solving the equations that describe the dynamics, the whole thing required some choreography and planning. You need to work out exactly how to throw the pole so that it is catchable. It is easy to work out trajectories for the thrower that end up with the pole spinning in an uncatchable state. Similarly, you need to plan how the catching quadrotor can position itself to intercept the pole in a position where regaining stability is fairly easy. Notice that human jugglers do these planning operations automatically, so there is some way to go before we can have quadrotors juggling arbitrary objects in the same way.
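The planning step can be sketched as follows: once released, the pole's centre of mass follows a ballistic arc, and (spinning about a principal axis) its angular rate stays roughly constant, so the thrower can predict the state at the catch and check whether it is recoverable. This is a hedged illustration, not the actual planner; the feasibility thresholds and all numbers are invented for the example.

```python
# Sketch of throw planning: predict the pole's free-flight state and
# check it against an assumed "catchable" envelope. The thresholds
# (max_omega, max_speed) are illustrative, not from the real system.
import numpy as np

g = np.array([0.0, 0.0, -9.81])

def predict_catch_state(p0, v0, omega0, t):
    """Ballistic prediction of centre-of-mass state and spin rate at time t."""
    p = p0 + v0 * t + 0.5 * g * t**2     # projectile motion of the CoM
    v = v0 + g * t
    return p, v, omega0                   # spin assumed constant in flight

def is_catchable(omega, v, max_omega=2.0, max_speed=4.0):
    """Assumed feasibility check: the catcher can only regain stability
    if the pole's spin and speed are modest at interception."""
    return abs(omega) < max_omega and np.linalg.norm(v) < max_speed

# Example: toss straight up with a small residual spin
p0 = np.array([0.0, 0.0, 1.0])
v0 = np.array([0.0, 0.0, 3.0])
t_catch = 2 * v0[2] / 9.81               # time to fall back to release height
p, v, w = predict_catch_state(p0, v0, 0.5, t_catch)
print(is_catchable(w, v))                 # True for these numbers
```

A real planner would search over release states for one that passes such a check while also being reachable by the catching quadrotor in time.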
Another issue is the accuracy of the dynamic model. You might have equations that determine the motion, but these only correspond to reality if you input the correct inertia tensor for the pole and the other parameters. Some of the fine tuning of the behavior is left to a machine learning algorithm, so the quadrotors get better at the trick the more they practice.
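The practice-makes-perfect part can be sketched in the spirit of iterative learning: each trial reveals how far the outcome missed, and that error corrects a feed-forward term for the next attempt, so repeated tries converge even when the model parameters are slightly wrong. The "true model error" and learning rate below are invented purely for illustration.

```python
# Minimal sketch of trial-by-trial fine tuning. A fixed unknown bias
# stands in for model inaccuracy (e.g. a slightly wrong inertia tensor);
# each practice run updates a learned correction toward it.
true_model_error = 0.37      # unknown bias in the real dynamics (assumed)
correction = 0.0             # learned feed-forward correction
alpha = 0.5                  # learning rate (assumed)

errors = []
for trial in range(20):
    observed_error = true_model_error - correction   # what this trial reveals
    correction += alpha * observed_error             # adjust for the next try
    errors.append(abs(observed_error))

print(errors[0] > errors[-1])   # later attempts are more accurate: True
```

With this update rule the residual error shrinks geometrically by a factor of (1 - alpha) per trial, which is the sense in which the quadrotors "get better with practice".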