2018

(For results see bottom of page)

Context

We are pleased to announce the next iteration of the Runtime Verification Challenge. From 2014 to 2016 this ran as a competition comparing RV tools, principally on runtime performance, in three tracks based on the programming language of the monitored application. Following a hiatus in 2017 (when the RV-CuBES workshop was held instead), we are announcing a more foundational RV Challenge for 2018. Modern software and cyber-physical systems require runtime verification, yet the burgeoning collection of RV technologies remains comparatively untested due to a dearth of benchmarks for objective, comparative evaluation of their performance. This is not for lack of effort; it is due to a glaring gap in our understanding of what such benchmarks should look like, and of what exactly we need to measure. Therefore, in 2018 we will host an RV Benchmark Competition to take the first major steps towards filling this gap.

Challenge Categories

The challenge will be formed of two categories:

  • The MTL category. This sets a single format for traces and specifications, and aims to allow objective comparison of most RV tools.
  • The Open category. A flexible category that accepts benchmarks in many different forms and allows them to be submitted with minimal work.

Submissions to either category must provide a benchmark package consisting of trace data, specification information, and an oracle (the expected result), as well as a brief overview paper describing the submission.
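As a purely illustrative sketch (the actual formats are specified in the rules document), an MTL-category benchmark might consist of:

  • a timestamped event trace, e.g. (0, request), (3, grant), (7, request), (9, grant);
  • an MTL specification, e.g. □(request → ◇[0,5] grant), i.e. "every request is granted within 5 time units";
  • an oracle giving the expected verdict, here "satisfied", since both requests in the trace are granted within 5 time units.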

For full details of the submission structure and how to submit, please refer to the rules document.

Evaluation

As explained in the rules document, submissions will be judged by an independent panel of experts and awards will be made in a number of categories. Evaluation will take place during the 18th International Conference on Runtime Verification.

Timeline
Note that the timeline below has been significantly revised with respect to the original timeline.

Initial submissions must be made by 31st October and must be finalised by 5th November.

All submissions will be invited to present during a special session at the 18th International Conference on Runtime Verification (Nov 11-13). However, attendance at this event is not a requirement for taking part in the challenge.

Results

Thank you to all those who took part in the competition. All submitted entries can be found in this GitHub repository. There were two submissions in the MTL category and seven in the Open category. All submissions were scored by a panel of 15 RV experts (including a representative from each submission) using the scoring criteria available here.

In the MTL category, the winner was Herra: A Prototype Tool for Online Runtime Verification Benchmark Generation by Josh Wallin and Kristin Rozier.

In the Open category, there was a three-way tie between submissions that scored within 1% of each other; each received the highest score in one of the three scoring criteria. The three winners were (in no particular order):

  • The DejaVu Runtime Verification Benchmark by Klaus Havelund, Doron Peled and Dogan Ulus.
  • ARTiMon Monitoring Tool: The Hybrid Engine Cycles Benchmark by Nicolas Rapin.
  • Stream Characteristics for First-Order Monitoring by Joshua Schneider and Srdan Krstic.

See here for the slides from RV presenting the results and a selection of submissions.

Organisers

Giles Reger, University of Manchester (Chair)
Kristin Yvonne Rozier, Iowa State University
Volker Stolz, Western Norway University of Applied Sciences

Contact Giles for queries relating to rules and submission.