WebKit uses a JIT compiler that applies three tiers of optimization.
The first tier is a standard interpreter, which ensures that the JavaScript code starts running without any delay. It may not run as fast as possible, but it has low latency, which is usually what users want. If a function is called more than six times, or if a loop runs more than 100 times, the code moves to the next tier and is JIT compiled without much optimization. If it is called more than 60 times, or loops more than 1000 times, an optimizer is applied. The optimized code runs about three times faster than the baseline JIT and about 30 times faster than the interpreter.
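To make the tiering policy concrete, here is a minimal sketch of the kind of code it targets. The function and the threshold comments are illustrative only; the exact trigger counts are internal engine heuristics, not something scripts can observe or control.

```javascript
// A hot numeric loop of the sort the tiering policy is designed for.
// On the first few calls it runs in the interpreter; once it has been
// called or looped often enough (the thresholds quoted above), the
// engine promotes it through the JIT tiers transparently.
function sumOfSquares(n) {
  let total = 0;
  for (let i = 0; i < n; i++) {
    // With n = 1000 this loop alone crosses the quoted
    // 1000-iteration mark for the optimizing tier.
    total += i * i;
  }
  return total;
}

console.log(sumOfSquares(1000));
```

Nothing in the source changes between tiers; the same function simply gets recompiled with progressively more aggressive optimization as it proves itself hot.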
The optimizer was good, but it didn't do the sort of job that a C or C++ optimizer does in optimizing register allocation and so on. Rather than implement its own fourth tier from scratch, the WebKit team decided to use the existing LLVM optimization layer.
The LLVM-based fourth tier is called FTL, which is a clever name because most people read it as "Faster Than Light" rather than "Fourth Tier LLVM". The results are impressive - about 40 times faster than the interpreter and three times faster than the original optimizing JIT.
You can read more about how the optimizer works, and how the time that the LLVM backend takes was reduced, in the original blog post - see More Information.
FTL is currently in WebKit nightly builds, and you can expect to see it in release versions soon.
In many ways this is a case of WebKit catching up with Chrome and Firefox. Could it be that Chrome and Firefox could get similar improvements by using the state-of-the-art LLVM backend optimizer?