Google Launches Fuzzer Benchmarking Service
Thursday, 12 March 2020

Google has launched FuzzBench, a free automated service for evaluating fuzzers. Google says the goal of FuzzBench is to make it painless to rigorously evaluate fuzzing research and to make fuzzing research easier for the community to adopt.

Fuzzing is an automated way of testing software by passing malformed data to an app to see how it copes. Google says it has found tens of thousands of bugs with fuzzers like libFuzzer and AFL, and that while there's plenty of research suggesting improvements to the tools, it isn't clear how well the suggested improvements would work in practice.
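The idea can be sketched in a few lines of Python. This toy example (the `parse_header` target and the exhaustive one-byte mutator are purely illustrative, not part of any real fuzzer) feeds every one-byte mutation of a seed input to a target function and records which inputs make it crash:

```python
def one_byte_mutations(seed):
    """Yield every input obtained by replacing one byte of the seed."""
    for pos in range(len(seed)):
        for value in range(256):
            mutated = bytearray(seed)
            mutated[pos] = value
            yield bytes(mutated)

def fuzz(target, seed):
    """Run the target on each mutated input and collect crashing inputs."""
    crashes = []
    for data in one_byte_mutations(seed):
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

# A deliberately buggy toy target: it chokes on a 0xFF marker byte.
def parse_header(data):
    if data and data[0] == 0xFF:
        raise ValueError("unhandled marker byte")

crashes = fuzz(parse_header, seed=b"GOOD")
```

Real fuzzers like AFL and libFuzzer replace the blind enumeration here with coverage-guided mutation strategies, which is what makes them effective on large programs.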

Google developers think existing fuzzing evaluations have shortcomings, such as not using a large and diverse set of real-world benchmarks, or having too few or too short trials. They point out that:

"this is understandable since full scale experiments can be prohibitively expensive for researchers. For example, a 24-hour, 10-trial, 10 fuzzer, 20 benchmark experiment would require 2,000 CPUs to complete in a day."

FuzzBench is designed to help solve these issues by providing a framework for painlessly evaluating fuzzers in a reproducible way. On its GitHub page, FuzzBench is described as having an easy API with benchmarks from real-world projects, and a reporting library that produces graphs and statistical tests designed to help developers understand the significance of results.

To use FuzzBench, researchers integrate a fuzzer, and FuzzBench then runs an experiment for 24 hours with many trials and real-world benchmarks. Based on the data from this experiment, FuzzBench produces a report comparing the performance of the fuzzer to other fuzzers, along with measures of the strengths and weaknesses of each. The hope is that researchers can then spend their time improving fuzzing techniques rather than on setting up evaluations and dealing with existing fuzzers.

The Google team says most integrations are less than 50 lines of code, and that once a fuzzer is integrated it can fuzz almost all of the 250+ OSS-Fuzz projects out of the box. The team has already integrated ten fuzzers, including AFL, libFuzzer and honggfuzz, as well as several academic projects such as QSYM and Eclipser.
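Based on FuzzBench's documentation, an integration is essentially a `fuzzer.py` module defining `build()` and `fuzz()` entry points, plus a Dockerfile for the fuzzer's dependencies. The sketch below is illustrative rather than a working integration: the AFL binary path and flags, the `afl_command` helper and the bare `make` build step are all assumptions.

```python
"""Sketch of a FuzzBench fuzzer.py integration (illustrative, untested)."""
import subprocess

def afl_command(input_corpus, output_corpus, target_binary):
    """Assemble the fuzzer command line (binary path and flags assumed)."""
    return ['./afl-fuzz', '-i', input_corpus, '-o', output_corpus,
            '--', target_binary]

def build():
    """Build the benchmark with the fuzzer's instrumentation.

    A real integration would point CC/CXX at the fuzzer's instrumenting
    compiler wrappers before building the benchmark here.
    """
    subprocess.check_call(['make'])

def fuzz(input_corpus, output_corpus, target_binary):
    """Run the fuzzer until FuzzBench ends the trial."""
    subprocess.call(afl_command(input_corpus, output_corpus, target_binary))
```

Because the entry points are this small, most of the work in a real integration is in the Dockerfile and build configuration rather than in Python.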

Reports include statistical tests to give an idea of how likely it is that performance differences between fuzzers are simply due to chance, as well as the raw data so researchers can do their own analysis.
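The kind of test involved is a rank-based comparison such as the Mann-Whitney U test, which asks whether one fuzzer's per-trial coverage tends to exceed another's. A minimal pure-Python version of the idea (using the normal approximation and omitting the usual tie correction for brevity; the coverage numbers are made up for illustration) looks like this:

```python
import math

def mann_whitney_u(sample_a, sample_b):
    """Two-sided Mann-Whitney U test via the normal approximation.

    Returns (U, p). Ties count as half a win; the variance term omits
    the tie correction to keep the sketch short.
    """
    n1, n2 = len(sample_a), len(sample_b)
    # U counts, over all pairs, how often sample_a beats sample_b.
    u = sum(1.0 if a > b else 0.5 if a == b else 0.0
            for a in sample_a for b in sample_b)
    mean = n1 * n2 / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mean) / sd
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p-value
    return u, p

# Hypothetical final edge coverage from ten trials of two fuzzers:
cov_a = [1510, 1492, 1505, 1499, 1520, 1488, 1513, 1502, 1495, 1508]
cov_b = [1460, 1475, 1452, 1470, 1481, 1466, 1458, 1473, 1449, 1468]
u, p = mann_whitney_u(cov_a, cov_b)  # small p: the gap is unlikely to be chance
```

In practice a library implementation with proper tie handling would be used; the point of running many trials per fuzzer is precisely to give tests like this enough data to separate real improvements from run-to-run noise.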


More Information

FuzzBench On GitHub

Related Articles

Microsoft Launches Cloud Fuzzing Service

New Tool Detects RegEx Security Weakness

Tactical Pentesting With Burp Suite

