Jul 28, 2015

The paper Enforcement of (Timed) Properties with Uncontrollable Events has been accepted to ICTAC 2015. Here is the abstract of the paper:

This paper deals with runtime enforcement of untimed and timed properties with uncontrollable events. Runtime enforcement consists in modifying the executions of a running system to ensure their correctness with respect to a desired property. We introduce a framework that takes as input any regular (timed) property over an alphabet of events, with some of these events being uncontrollable. An uncontrollable event cannot be delayed or intercepted by an enforcement mechanism. Enforcement mechanisms satisfy important properties, namely soundness and compliance – meaning that enforcement mechanisms output correct executions that are close to the input execution. We discuss the conditions for a property to be enforceable with uncontrollable events, and we define enforcement mechanisms that modify executions to obtain a correct output, as soon as possible. Moreover, we synthesize sound and compliant descriptions of runtime enforcement mechanisms at two levels of abstraction to facilitate their design and implementation.
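
As a rough illustration of the idea (not the construction defined in the paper), below is a minimal Java sketch of an enforcement mechanism with uncontrollable events: uncontrollable events are emitted immediately, while controllable events are buffered and released only when the resulting output satisfies the property. The Property interface, the class name, and the release policy are assumptions made for this example.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Set;

// Hypothetical stand-in for a regular property over sequences of events.
interface Property {
    boolean satisfiedBy(List<String> word);
}

// Simplified enforcement mechanism: it never delays or intercepts
// uncontrollable events, and holds back controllable ones until releasing
// them yields a correct output.
final class UncontrollableAwareEnforcer {
    private final Property property;
    private final Set<String> uncontrollable;                 // events that cannot be delayed or intercepted
    private final Deque<String> buffer = new ArrayDeque<>();  // controllable events held back
    private final List<String> output = new ArrayList<>();    // events released so far

    UncontrollableAwareEnforcer(Property property, Set<String> uncontrollable) {
        this.property = property;
        this.uncontrollable = uncontrollable;
    }

    // Process one input event and return the events released to the environment.
    List<String> receive(String event) {
        List<String> released = new ArrayList<>();
        if (uncontrollable.contains(event)) {
            output.add(event);   // must be output immediately, whatever the current state
            released.add(event);
        } else {
            buffer.addLast(event);
        }
        // Release buffered controllable events as soon as the output stays correct.
        while (!buffer.isEmpty()) {
            List<String> candidate = new ArrayList<>(output);
            candidate.add(buffer.peekFirst());
            if (!property.satisfiedBy(candidate)) {
                break; // releasing this event now would make the output incorrect
            }
            String next = buffer.removeFirst();
            output.add(next);
            released.add(next);
        }
        return released;
    }
}
```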

This is joint work with M. Renard and A. Rollet from LaBRI, S. Pinisetty from Aalto University, and T. Jéron and H. Marchand from Inria Rennes.

Jun 11, 2015

The paper Monitoring Electronic Exams has been accepted for publication in the proceedings of RV 2015, the 15th international conference on Runtime Verification.

Here is the abstract of the paper:

Universities and other educational organizations are adopting computer-based assessment tools (herein called e-exams) to reach larger and ubiquitous audiences. While this makes examination tests more accessible, it exposes them to unprecedented threats not only from candidates but also from authorities, which organize exams and deliver marks. Thus, e-exams must be checked to detect potential irregularities. In this paper, we propose several monitors, expressed as Quantified Event Automata (QEA), to monitor the main properties of e-exams. Then, we implement the monitors using MarQ, a recent Java tool designed to support QEAs. Finally, we apply our monitors to logged data from real e-exams conducted by Université Joseph Fourier at its pharmacy faculty, as part of Épreuves Classantes Nationales informatisées, a pioneering project which aims to conduct all French medicine exams electronically by 2016. Our monitors found discrepancies between the specification and the implementation.
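
To give a flavor of slicing a property per candidate in the spirit of QEAs, here is a hand-rolled Java sketch (it does not use the MarQ API) for a hypothetical property: a candidate may submit an answer only after having registered for the exam. The property, event names, and class are illustrative only.

```java
import java.util.HashMap;
import java.util.Map;

// Toy per-candidate monitor: one small automaton instance per candidate id,
// in the spirit of quantified event automata (not the MarQ API).
// Hypothetical property: a candidate may submit an answer only after registering.
final class PerCandidateMonitor {
    private enum State { START, REGISTERED, VIOLATED }

    private final Map<String, State> slices = new HashMap<>(); // automaton state per candidate

    // Feed one logged event; returns false if the property is violated for this candidate.
    boolean step(String candidateId, String action) {
        State s = slices.getOrDefault(candidateId, State.START);
        switch (s) {
            case START:
                if (action.equals("register")) s = State.REGISTERED;
                else if (action.equals("submit")) s = State.VIOLATED; // submitted before registering
                break;
            case REGISTERED:
            case VIOLATED:
                break; // nothing more to track in this toy property
        }
        slices.put(candidateId, s);
        return s != State.VIOLATED;
    }
}
```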

This is joint work with Ali Kassem and Pascal Lafourcade from Univ. Grenoble Alpes, Vérimag.

Jun 11, 2015

The paper TiPEX: a tool chain for Timed Property Enforcement during eXecution has been accepted for publication in the proceedings of RV 2015, the 15th international conference on Runtime Verification.

Here is the abstract of the paper:

The TiPEX tool implements the enforcement monitoring algorithms for timed properties proposed in [1]. Enforcement monitors are generated from timed automata specifying timed properties. Such monitors correct input sequences by adding extra delays between events. Moreover, TiPEX also provides modules to generate timed automata from patterns, compose them, and check the class of properties they belong to, in order to optimize the monitors. The paper also presents a performance evaluation of TiPEX within an experimental setup.
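
To illustrate the kind of correction mentioned above (adding extra delays between events), here is a toy Java sketch that enforces a hypothetical timing constraint: consecutive output events must be separated by at least a minimum delay. TiPEX itself works from arbitrary timed automata; this sketch is only meant to convey the flavor of the correction.

```java
import java.util.ArrayList;
import java.util.List;

// Toy enforcement by delaying: output dates are pushed back so that any two
// consecutive events are at least `minSeparation` time units apart.
final class DelayingEnforcer {
    // A timed event: an action together with its absolute date.
    record TimedEvent(String action, double date) {}

    static List<TimedEvent> enforce(List<TimedEvent> input, double minSeparation) {
        List<TimedEvent> output = new ArrayList<>();
        double lastDate = Double.NEGATIVE_INFINITY;
        for (TimedEvent e : input) {
            // Never release an event earlier than it was received, and add
            // extra delay if it arrives too close to the previous output.
            double date = Math.max(e.date(), lastDate + minSeparation);
            output.add(new TimedEvent(e.action(), date));
            lastDate = date;
        }
        return output;
    }
}
```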

This is joint work with S. Pinisetty, T. Jéron, and H. Marchand from Inria Rennes.

TiPEX is available at the following URL:

http://srinivaspinisetty.github.io/Timed-Enforcement-Tools/

[1] Srinivas Pinisetty, Yliès Falcone, Thierry Jéron, Hervé Marchand, Antoine Rollet, Omer Nguena-Timo:
Runtime enforcement of timed properties revisited. Formal Methods in System Design 45(3): 381-422 (2014)

Jan 23, 2015

Today at 1:30pm, Srinivas Pinisetty will defend his Ph.D. thesis on Runtime Enforcement of Timed Properties at Inria Rennes, Salle Métivier.

The jury is composed of:
  • Martin LEUCKER
  • Didier LIME
  • Sophie PINCHINAT
  • Frédéric HERBRETEAU
  • Yliès FALCONE
  • Thierry JÉRON
  • Hervé MARCHAND

Here is the abstract of the thesis:

Runtime enforcement is a verification/validation technique aiming at correcting possibly incorrect executions of a system of interest. It is a powerful technique to ensure that a running system satisfies some desired properties. Using an enforcement monitor, an (untrustworthy) input execution (in the form of a sequence of events) is modified into an output sequence that complies with a property. Over the last decade, runtime enforcement has been mainly studied in the context of untimed properties.

In this thesis, we consider enforcement monitoring for systems where the physical time elapsing between actions matters. Executions are thus modeled as sequences of events composed of actions with dates (called timed words). We consider runtime enforcement for timed specifications modeled as timed automata, in the general case of regular timed properties. The proposed enforcement mechanism has the power of both delaying events to match timing constraints, and suppressing events when no delaying is appropriate, thus allowing the enforcement mechanisms and systems to continue executing.

To ease their design and correctness proofs, enforcement mechanisms are described at several levels: enforcement functions that specify the input-output behavior in terms of transformations of timed words, constraints that should be satisfied by such functions, enforcement monitors that describe the operational behavior of enforcement functions, and enforcement algorithms that describe the implementation of enforcement monitors. The feasibility of enforcement monitoring for timed properties is validated by prototyping the synthesis of enforcement monitors from timed automata. We also show the usefulness of enforcement monitoring of timed specifications for several application domains.
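
As an illustration of the separation between the levels of description mentioned in the abstract (enforcement functions versus enforcement monitors), here is a hypothetical Java sketch of the corresponding interfaces; the names and signatures are assumptions for the example, not the definitions used in the thesis.

```java
import java.util.List;

// A timed event: an action with its (absolute) date.
record Event(String action, double date) {}

// Functional level: an enforcement function transforms a whole input timed word
// into a corrected output timed word.
interface EnforcementFunction {
    List<Event> enforce(List<Event> inputTimedWord);
}

// Operational level: an enforcement monitor consumes the input incrementally
// and decides, as time elapses, which (possibly delayed) events to release
// and which to suppress.
interface EnforcementMonitor {
    // Called when an input event is observed; returns the events released now.
    List<Event> onEvent(Event event);

    // Called when time elapses with no input; may release buffered events
    // whose corrected release dates have been reached.
    List<Event> onTimeElapse(double now);
}
```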

I unofficially supervised Srinivas’ Ph.D. thesis, and we collaborated actively during the three years of his Ph.D. work.


  • Thesis: link.
  • Webpage of the defense: link.
  • Video of the defense: link.
Jan 4, 2015

Tomorrow, I will start a 2-week series of lectures at CETIC (the African Center of Excellence in computer science), Yaoundé, Cameroon.

The lectures cover the semantics of programming languages and the design of compilers. They are based on the lecture given in the international master's program at Univ. Grenoble Alpes, but are more practice-oriented: in particular, the students will have to design a mini-compiler.

Many thanks to Prof. Maurice Tchuenté for the invitation. Many thanks to Prof. Georges Kouamou, Prof. Jacques Tagoudjeu, and Prof. Maurice Tchuenté for the warm welcome.

Thanks to Inria and Prof. Jean-François Méhaut for the support.


Dec 17, 2014

Today is officially the first day of the COST Action Runtime Verification beyond Monitoring (ARVI).

A meeting of the Management Committee (MC) will be held at the COST offices in Belgium.

I am representing France in the MC.

Below is a description of the COST action:

Runtime verification (RV) is a computing analysis paradigm based on observing a system at runtime to check its expected behavior. RV has emerged in recent years as a practical application of formal verification, and as a less ad hoc approach than conventional testing, by building monitors from formal specifications.
RV has great potential applicability beyond software reliability, if one allows monitors to interact back with the observed system and generalizes to new domains beyond computer programs (such as hardware, devices, cloud computing, and even human-centric systems). Given the European leadership in computer-based industries, novel applications of RV to these areas can have an enormous impact in terms of the new class of designs enabled and their reliability and cost-effectiveness.

EU COST Action IC1402: project overview on the EU website.


Dec 15, 2014

Runtime Verification is a verification technique for the analysis of software at execution-time based on extracting information from a running system and checking if the observed behaviors satisfy or violate the properties of interest. During the last decade, many important tools and techniques have been developed and successfully employed. However, there is a pressing need to compare such tools and techniques, since we currently lack a common benchmark suite as well as scientific evaluation methods to validate and test new prototype runtime verification tools.
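
As a minimal illustration of checking observed behaviors against a property of interest, here is a toy offline trace checker in Java for a hypothetical safety property ("a resource is never accessed after it has been released"); it is not tied to any particular competition benchmark or tool.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy offline monitor: scans a finite trace and reports the first violation
// of the (hypothetical) property "no access to a resource after its release".
final class TraceChecker {
    record Event(String action, String resource) {}

    // Returns the index of the first violating event, or -1 if the trace is correct.
    static int check(List<Event> trace) {
        Set<String> released = new HashSet<>();
        for (int i = 0; i < trace.size(); i++) {
            Event e = trace.get(i);
            if (e.action().equals("release")) {
                released.add(e.resource());
            } else if (e.action().equals("access") && released.contains(e.resource())) {
                return i; // access after release: property violated here
            }
        }
        return -1;
    }
}
```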

The main aims of CRV-2015 are to:

  • Stimulate the development of new efficient and practical runtime verification tools and the maintenance and improvement of the already developed ones.
  • Produce a benchmark suite for runtime verification tools, by sharing case studies and programs that researchers and developers can use in the future to test and to validate their prototypes.
  • Discuss the metrics employed for comparing the tools.
  • Provide a comparison of the tools on different benchmarks and evaluate them using different criteria.
  • Enhance the visibility of presented tools among the different communities (verification, software engineering, cloud computing and security) involved in software monitoring.

CRV-2015 Organizers

Please direct any enquiries to the competition co-organizers (crv15.chairs _AT_ imag _DOT_ fr).

Yliès Falcone (Université Joseph Fourier, France).
Dejan Nickovic (AIT Austrian Institute of Technology GmbH, Austria).
Giles Reger (University of Manchester, UK).
Daniel Thoma (University of Luebeck, Germany).

CRV-2015 Jury

The CRV Jury will include a representative of each participating team and the competition chairs. The Jury will be consulted at each stage of the competition to ensure that the rules set by the competition chairs are fair and reasonable.

Call for Participation

The main goal of CRV 2015 is to compare tools for runtime verification. We invite and encourage participation with benchmarks and tools for the competition. The competition will consist of three main tracks, based on the input language used:

  • Track on monitoring Java programs (online monitoring).
  • Track on monitoring C programs (online monitoring).
  • Track on monitoring of traces (offline monitoring).

The competition will follow three phases:

  • Benchmark/specification collection phase: participants are invited to submit their benchmarks (C or Java programs and/or traces). The organizers will collect them in a common, publicly available repository. Participants will then train their tools on the shared benchmarks.
  • Monitor collection phase: participants are invited to submit their monitors. Participants whose tools/monitors meet the qualification requirements will be qualified for the evaluation phase.
  • Evaluation phase: the qualified tools will be evaluated on the submitted benchmarks and ranked using different criteria (e.g., memory utilization, CPU utilization, …). The final results will be presented at the RV 2015 conference.

More information is available at: http://rv2015.conf.tuwien.ac.at/?page_id=276