CRV 2015 – The 2nd International Competition on Runtime Verification

Runtime verification is a verification technique for analyzing software at execution time, based on extracting information from a running system and checking whether the observed behaviors satisfy or violate properties of interest. During the last decade, many important tools and techniques have been developed and successfully employed. However, there is a pressing need to compare these tools and techniques, since the community currently lacks a common benchmark suite as well as scientific evaluation methods for validating and testing new prototype runtime verification tools.
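To make the technique concrete, here is a minimal, hypothetical sketch (our own illustration, not any competing tool): a finite-state monitor for the property "no read after close", which consumes events emitted by a running program and reports a verdict as soon as the property is violated. All class and event names are assumptions made for this example.

    // Minimal runtime-verification sketch (hypothetical names, not a
    // competition tool): a finite-state monitor for "no read after close".
    public class NoReadAfterCloseMonitor {
        private enum State { OPEN, CLOSED, VIOLATED }
        private State state = State.OPEN;

        // Called with each observed event (e.g., from instrumentation points
        // in the monitored program); returns false once the property fails.
        public boolean step(String event) {
            switch (state) {
                case OPEN:
                    if (event.equals("close")) state = State.CLOSED;
                    break;
                case CLOSED:
                    if (event.equals("read")) state = State.VIOLATED;
                    break;
                case VIOLATED:
                    break;
            }
            return state != State.VIOLATED;
        }

        public static void main(String[] args) {
            // Simulated run: the program emits events as it executes.
            NoReadAfterCloseMonitor monitor = new NoReadAfterCloseMonitor();
            for (String event : new String[] { "read", "close", "read" }) {
                if (!monitor.step(event)) {
                    System.out.println("Property violated at event: " + event);
                    return;
                }
            }
            System.out.println("Run satisfies the property.");
        }
    }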

The main aims of CRV-2015 are to:

  • Stimulate the development of new, efficient, and practical runtime verification tools, as well as the maintenance and improvement of existing ones.
  • Produce a benchmark suite for runtime verification tools by sharing case studies and programs that researchers and developers can use in the future to test and validate their prototypes.
  • Discuss the metrics employed for comparing the tools.
  • Provide a comparison of the tools on different benchmarks and evaluate them using different criteria.
  • Enhance the visibility of presented tools among the different communities (verification, software engineering, cloud computing and security) involved in software monitoring.

CRV-2015 Organizers

Please direct any enquiries to the competition co-organizers (crv15.chairs _AT_ imag _DOT_ fr).

Yliès Falcone (Université Joseph Fourier, France).
Dejan Nickovic (AIT Austrian Institute of Technology GmbH, Austria).
Giles Reger (University of Manchester, UK).
Daniel Thoma (University of Luebeck, Germany).

CRV-2015 Jury

The CRV Jury will include a representative for each participating team and the competition chairs. The Jury will be consulted at each stage of the competition to ensure that the rules set by the competition chairs are fair and reasonable.

Call for Participation

The main goal of CRV 2015 is to compare tools for runtime verification. We invite and encourage participation with both benchmarks and tools. The competition consists of three main tracks, based on the input language used:

  • Track on monitoring Java programs (online monitoring).
  • Track on monitoring C programs (online monitoring).
  • Track on monitoring of traces (offline monitoring); see the sketch after this list.
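To illustrate the offline track, the following sketch replays a recorded trace through the monitor class sketched above; the trace file name and its one-event-per-line format are assumptions made for this example.

    // Hypothetical offline monitoring sketch: replay a recorded trace
    // (assumed format: one event name per line) through the monitor above.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    public class OfflineTraceChecker {
        public static void main(String[] args) throws IOException {
            // "trace.log" is a placeholder name for a recorded trace file.
            List<String> trace = Files.readAllLines(Path.of("trace.log"));
            NoReadAfterCloseMonitor monitor = new NoReadAfterCloseMonitor();
            for (int i = 0; i < trace.size(); i++) {
                if (!monitor.step(trace.get(i))) {
                    System.out.println("Violation at event " + (i + 1) + ": " + trace.get(i));
                    return;
                }
            }
            System.out.println("Trace satisfies the property.");
        }
    }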

The competition will follow three phases:

Benchmark/specification collection phase – participants are invited to submit their benchmarks (C or Java programs and/or traces). The organizers will collect them in a common, publicly available repository. Participants will then train their tools on the shared benchmarks.
Monitor collection phase – participants are invited to submit their monitors. Participants whose tools/monitors meet the qualification requirements will advance to the evaluation phase.
Evaluation phase – the qualified tools will be evaluated on the submitted benchmarks and ranked using different criteria (e.g., memory utilization, CPU utilization, …). The final results will be presented at the RV 2015 conference.

More information is available at: http://rv2015.conf.tuwien.ac.at/?page_id=276