Nov 29, 2016
 

SAC-SVT 2017

Software Verification and Testing 2017
A Track of the ACM Symposium on Applied Computing

Marrakech, Morocco, April 3 – 7, 2017

http://antares.sip.ucm.es/svt2017/


Scope

For the past thirty years, the ACM Symposium on Applied Computing has been a primary gathering forum for applied computer scientists, computer engineers, software engineers, and application developers from around the world. SAC 2017 is sponsored by the ACM Special Interest Group on Applied Computing (SIGAPP), and will be hosted by the University of Quebec (Montreal, Canada), University Cadi Ayyad (Marrakech, Morocco), Mohamed V University of Rabat – Mohammadia School Of Engineers (Rabat, Morocco) and National School of Applied Sciences (Kenitra, Morocco).
The Software Verification and Testing track aims at contributing to the challenge of improving the usability of formal methods in software engineering. The track covers areas such as formal methods for verification and testing, based on theorem proving, model checking, static analysis, and run-time verification. These areas cover a broad range of topics, including but not limited to:

  • Model checking
  • Theorem proving
  • Correct by construction development
  • Model-based testing
  • Verification-based testing
  • Symbolic execution
  • Static and run-time analysis
  • Abstract interpretation
  • Analysis methods for dependable systems
  • Software certification and proof carrying code
  • Fault diagnosis and debugging
  • Verification of large scale software systems
  • Real world applications and case studies applying software verification

Registration Information

Please note the following important dates regarding registration.

Author Registration

Friday, December 16, 2016

Early Registration

Friday, February 3, 2017

SAC-SVT day

1 day during April 3 – 7, 2017

 

For authors: paper registration is required and enables the inclusion of the paper/poster in the conference proceedings. An author or a proxy attending SAC MUST present the paper: this is a requirement for the paper/poster to be included in the ACM/IEEE digital library. No-shows of scheduled papers and posters will result in their exclusion from the ACM/IEEE digital library.

Accepted Papers

The following papers were accepted by the SAC-SVT 2017 Programme Committee.

  • Mohamed Amine Aouadhi, Benoît Delahaye and Arnaud Lanoix.
    Moving from Event-B to Probabilistic Event-B
  • Huu-Vu Nguyen and Tayssir Touili.
    CARET Model Checking For Pushdown Systems
  • Rudolf Ramler, Thomas Wetzlmaier and Claus Klammer.
    An Empirical Study on the Application of Mutation Testing for a Safety-Critical Industrial Software System 
  • Thibaud Antignac, Mukelabai Mukelabai and Gerardo Schneider.
    Specification, Design and Verification of an Accountability-Aware Surveillance Protocol
  • Davide Basile, Felicita Di Giandomenico and Stefania Gnesi.
    Statistical Model Checking of an Energy-Saving Cyber-Physical System in the Railway Domain
  • Jeongho Kim and Eunseok Lee.
    History-based Test Case Prioritization for Failure Information
  • Yunus Kiliç and Hasan Sözer.
    Generating Runtime Verification Specifications based on Static Code Analysis Alerts
  • Pedro Delgado-Pérez, Inmaculada Medina-Bulo, Sergio Segura, Antonio García-Domínguez and Juan José Domínguez-Jiménez.
    GiGAn: Evolutionary Mutation Testing for C++ Object-Oriented Systems
  • András Márki and Birgitta Lindström.
    Mutation Tools for Java

Venue and Travel Information

Information about the venue and how to travel to Marrakech is available on the SAC 2017 website at:

http://www.sigapp.org/sac/sac2017/


Aug 31, 2016
 

The deadline for submitting papers to the Software Verification and Testing track of the 32nd Annual ACM Symposium on Applied Computing (April 3 – 7, 2017, Marrakech, Morocco) is in two weeks.
More information:

  • http://antares.sip.ucm.es/svt2017/, and
  • http://www.sigapp.org/sac/sac2017/
Jan 23, 2015
 

Today at 1:30pm, Srinivas Pinisetty will defend his Ph.D. thesis on Runtime Enforcement of Timed Properties at Inria Rennes, Salle Métivier.

Jury:

  • Martin LEUCKER
  • Didier LIME
  • Sophie PINCHINAT
  • Frédéric HERBRETEAU
  • Yliès FALCONE
  • Thierry JÉRON
  • Hervé MARCHAND

Here is the abstract of the thesis:

Runtime enforcement is a verification/validation technique aiming at correcting possibly incorrect executions of a system of interest. It is a powerful technique to ensure that a running system satisfies some desired properties. Using an enforcement monitor, an (untrustworthy) input execution (in the form of a sequence of events) is modified into an output sequence that complies with a property. Over the last decade, runtime enforcement has been mainly studied in the context of untimed properties.

In this thesis, we consider enforcement monitoring for systems where the physical time elapsing between actions matters. Executions are thus modeled as sequences of events composed of actions with dates (called timed words). We consider runtime enforcement for timed specifications modeled as timed automata, in the general case of regular timed properties. The proposed enforcement mechanism has the power of both delaying events to match timing constraints, and suppressing events when no delaying is appropriate, thus allowing the enforcement mechanisms and systems to continue executing.

To ease their design and correctness proofs, enforcement mechanisms are described at several levels: enforcement functions that specify the input-output behavior in terms of transformations of timed words, constraints that should be satisfied by such functions, enforcement monitors that describe the operational behavior of enforcement functions, and enforcement algorithms that describe the implementation of enforcement monitors. The feasibility of enforcement monitoring for timed properties is validated by prototyping the synthesis of enforcement monitors from timed automata. We also show the usefulness of enforcement monitoring of timed specifications in several application domains.
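As a rough illustration of the delay-and-suppress idea (a toy sketch, not the timed-automata construction developed in the thesis), consider the following Python enforcement function over timed words. The property, the alphabet, and the minimum-gap constraint are invented for the example:

    # Toy enforcement function over timed words: a timed word is a list of
    # (date, action) pairs with non-decreasing dates. The (invented) property:
    # only actions in `alphabet` may occur, and consecutive output actions
    # must be at least `min_gap` seconds apart.
    def enforce(timed_word, alphabet, min_gap):
        output = []
        last_release = None
        for date, action in timed_word:
            if action not in alphabet:
                continue  # suppress: no amount of delaying makes it comply
            if last_release is None:
                release = date
            else:
                # delay: never release an event before its input date, nor
                # earlier than min_gap after the previously released event
                release = max(date, last_release + min_gap)
            output.append((release, action))
            last_release = release
        return output

    trace = [(0.0, "req"), (0.2, "req"), (0.5, "noise"), (3.0, "ack")]
    print(enforce(trace, {"req", "ack"}, 1.0))
    # [(0.0, 'req'), (1.0, 'req'), (3.0, 'ack')]

The invariants mirror the setting described above: output events are only released at or after their input dates (delaying), and events that no delay can make compliant are dropped (suppression), so the system can continue executing.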

I unofficially supervised Srinivas's Ph.D. thesis, and we collaborated actively during the three years of his Ph.D. work.

Edit:

  • Thesis: link.
  • Webpage of the defense: link.
  • Video of the defense: link.
Dec 17, 2014
 

Today is officially the first day of the COST Action Runtime Verification beyond Monitoring (ARVI).

A meeting of the Management Committee (MC) will be held in COST offices in Belgium.

I am representing France in the MC.

Below is a description of the COST action:

Runtime verification (RV) is a computing analysis paradigm based on observing a system at runtime to check its expected behavior. RV has emerged in recent years as a practical application of formal verification, and as a less ad hoc approach than conventional testing, since monitors are built from formal specifications.
RV has great potential applicability beyond software reliability if one allows monitors to interact back with the observed system, and if one generalizes RV to new domains beyond computer programs (such as hardware, devices, cloud computing, and even human-centric systems). Given the European leadership in computer-based industries, novel applications of RV to these areas can have an enormous impact in terms of the new class of designs enabled and their reliability and cost effectiveness.
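For readers unfamiliar with the paradigm, here is a minimal sketch of what a monitor can look like. The safety property ("no 'send' before 'connect', no 'send' after 'disconnect'") and all event names are invented for the example; actual RV tools derive such monitors automatically from formal specifications such as temporal-logic formulas or automata:

    # Minimal finite-state monitor for the invented safety property
    # "no 'send' before 'connect' and no 'send' after 'disconnect'".
    TRANSITIONS = {
        ("disconnected", "connect"): "connected",
        ("connected", "send"): "connected",
        ("connected", "disconnect"): "disconnected",
    }

    def monitor(events):
        state = "disconnected"
        for event in events:
            # any event without an explicit transition violates the property
            state = TRANSITIONS.get((state, event), "violation")
            if state == "violation":
                return f"property violated at event {event!r}"
        return "no violation observed"

    print(monitor(["connect", "send", "disconnect", "send"]))
    # property violated at event 'send'

A monitor that interacts back with the observed system, in the spirit of the action's "beyond monitoring" theme, would additionally feed its verdict back to the running system instead of merely reporting it.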

EU COST Action IC1402 — project overview at the EU website.


Dec 15, 2014
 

Runtime verification is a technique for analyzing software at execution time, based on extracting information from a running system and checking whether the observed behaviors satisfy or violate the properties of interest. During the last decade, many important tools and techniques have been developed and successfully employed. However, there is a pressing need to compare such tools and techniques, since we currently lack both a common benchmark suite and scientific evaluation methods to validate and test new prototype runtime verification tools.

The main aims of CRV-2015 are to:

  • Stimulate the development of new efficient and practical runtime verification tools and the maintenance and improvement of the already developed ones.
  • Produce a benchmark suite for runtime verification tools by sharing case studies and programs that researchers and developers can use in the future to test and validate their prototypes.
  • Discuss the metrics employed for comparing the tools.
  • Provide a comparison of the tools on different benchmarks and evaluate them using different criteria.
  • Enhance the visibility of presented tools among the different communities (verification, software engineering, cloud computing and security) involved in software monitoring.

CRV-2015 Organizers

Please direct any enquiries to the competition co-organizers (crv15.chairs _AT_ imag _DOT_ fr).

Yliès Falcone (Université Joseph Fourier, France).
Dejan Nickovic (AIT Austrian Institute of Technology GmbH, Austria).
Giles Reger (University of Manchester, UK).
Daniel Thoma (University of Luebeck, Germany).

CRV-2015 Jury

The CRV Jury will include a representative of each participating team and the competition chairs. The Jury will be consulted at each stage of the competition to ensure that the rules set by the competition chairs are fair and reasonable.

Call for Participation

The main goal of CRV 2015 is to compare tools for runtime verification. We invite and encourage participation with benchmarks and tools for the competition. The competition will consist of three main tracks based on the input language used (a minimal sketch of the offline setting follows the list):

  • Track on monitoring Java programs (online monitoring).
  • Track on monitoring C programs (online monitoring).
  • Track on monitoring of traces (offline monitoring).
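
To make the offline setting concrete, here is a minimal sketch of a trace checker; the one-event-per-step trace representation and the request/response property are assumptions for illustration, not the competition's actual formats. Online monitoring differs in that events are checked as the program produces them rather than from a recorded trace:

    # Offline monitoring sketch: check a finished execution trace against the
    # (invented) property "every 'request' is eventually matched by a 'response'".
    def check_trace(events):
        pending = 0
        for event in events:
            if event == "request":
                pending += 1
            elif event == "response":
                if pending == 0:
                    return "violation: 'response' without a pending 'request'"
                pending -= 1
        if pending == 0:
            return "satisfied"
        return f"violation: {pending} unanswered request(s)"

    # In the competition, the trace would be read from a file; a list stands in here.
    print(check_trace(["request", "response", "request"]))
    # violation: 1 unanswered request(s)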

The competition will follow three phases:

Benchmarks/Specification collection phase – the participants are invited to submit their benchmarks (C or Java programs and/or traces). The organizers will collect them in a common repository (publicly available). The participants will then train their tools using the shared benchmarks.
Monitor collection phase – the participants are invited to submit their monitors. The participants with the tools/monitors that meet the qualification requirements will be qualified for the evaluation phase.
Evaluation phase – the qualified tools will be evaluated on the submitted benchmarks and ranked using different criteria (e.g., memory utilization, CPU utilization, …). The final results will be presented at the RV 2015 conference.

More information is available at: http://rv2015.conf.tuwien.ac.at/?page_id=276