The SVC project is devoted to fundamental and applied research in the area of system validation for telematics systems. System validation is concerned with techniques and tools for the (automated) analysis of the (functional) correctness of specifications, designs and products. In the project, validation methods and tools will be developed, applied, evaluated and improved on the basis of real-life applications. Important validation techniques that will be used in the project are simulation, model checking, verification and testing.
This project aims to secure the position of the Telematics Institute with respect to strategic expertise in the area of system validation. To achieve this, the project builds on existing activities with a proven track record in applicability. It is intended that in the course of the project new spin-off projects will be started in the form of dedicated validation studies for organisations with specific validation needs. Such special studies will be carried out as separate contract research. This can ultimately lead to a Systems Verification Centre offering state-of-the-art validation support as a standard TI service.
To coordinate the interaction with industry, the project has a steering committee consisting of industrial experts who have an interest in applying validation methods and tools within their organisations. Initially, the application of validation methods will be studied on the basis of jointly selected case studies. Later, feedback will be based on the aforementioned larger spin-off validation projects. The project will also carry out pro-active studies into generally applicable state-of-the-art validation techniques, maintain working sets of successful tools and techniques, and keep abreast of similar projects and developments elsewhere. As a centre of expertise for system validation, SVC will also actively promote knowledge and technology transfer to industry in the area of validation methods and tools through publications, presentations, demonstrations, courses and workshops.
The research partners of this project are the CTIT, CWI and the Telematics Institute. Industrial support from within the Telematics Institute consortium has been obtained from CMG, IBM, KPN and Lucent. Additional support has been obtained from Rijkswaterstaat and Holland Railconsult.
The SVC project aims to achieve several goals at once, all of which are directly in line with the aims and objectives of the Telematics Institute. The specific objectives of the project are:
These objectives will be realised by the two main activities of the project, which will be carried out in a closely interwoven fashion:
The proposed project is of direct relevance for all the TI research clusters that involve the technical design of (parts of) telematic systems. Effective validation methods and tools are a strategic asset for the fast development of reliable telematic systems. The project contributes concretely to the knowledge and technology transfer to industry in the area of state-of-the-art validation techniques. Moreover, it promotes the international visibility of the TI as a centre of expertise for system validation. Finally, the project is set up in such a way as to promote contract research in system validation.
Telematic systems provide increasingly critical services to their users. This makes the reliability of their functioning a key design issue. Because such systems are usually highly complex and there is pressure to reduce their design and development time ("time to market"), the use of advanced techniques and tools to check the validity of designs is of increasing importance. Validation issues are very relevant for most of the TI research clusters (see Scientific Programme for the first cycle, clusters 1-6, cf. e.g. section 1.3.2).
In spite of the importance of advanced validation techniques, many projects pay only lip service to the systematic validation of their designs. An important reason is that practically usable validation techniques have only recently become available, and that experts who know how to develop and apply them are scarce. Moreover, the use of advanced validation techniques is quite sophisticated and often requires specialist knowledge and skills that are unrelated to the application domain of the system itself. These arguments suggest that the TI should concentrate its validation efforts and expand its expertise in this area, making it available to client projects within and outside the TI, instead of following a policy where validation efforts are simply scattered over many different projects.
Internationally leading research groups in system validation are active within the TI consortium. The Formal Methods and Tools Group of the CTIT and the SEN2 Theme of CWI have a worldwide reputation in the development and practical application of formal methods and tools for system specification and validation. In recent years practical validation has been carried out in collaboration with external partners such as Philips, Bosch-Blaupunkt, Nederlandse Spoorwegen, CMG, Rijkswaterstaat and others on real-life systems for car automation, consumer electronics, control systems, railway safety, etc. The activities of the Telematics Institute in the TESTBED project have led to the creation of an advanced architecture and software environment for the integration of software tools, which can be used for the creation of integrated validation environments. Members of the TESTBED Tools team have extensive experience in software tool development and integration.
Given the experience, past achievements and mixture of common and complementary skills, these groups and their expertise form an excellent starting point for the proposed project. Nevertheless, it is intended that the project be based on an open consortium, in which efforts can be strengthened further by the involvement of other groups, now or later.
Based on the use of formal methods, a host of techniques to analyse system behaviour has been developed in the last two decades. Using constructive formal description techniques, system designs can be defined in terms of precise and unambiguous specifications that provide the basis for systematic analysis. Tool environments, consisting of static semantic checkers and animators that allow system simulation, can be used for a quick assessment of the quality of specifications.
Based on the specifications of the system behaviour, models can be obtained that can be systematically checked with respect to given properties. Such properties must be formalised in an appropriate logical language, usually a modal logic involving special operators that represent temporal and/or spatial aspects of system behaviour. This technique of so-called model checking is one of the more effective validation methods currently available. One of its advantages is that implementations can also be verified against partial specifications, by considering only a subset of all specification requirements. This allows for increased efficiency by checking correctness with respect to only the most relevant requirements.
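By way of illustration, the sketch below (in Python) shows the essence of explicit-state model checking for a simple safety property, an invariant: all reachable states of a model are enumerated and checked, and a counterexample trace is produced if the property fails. The toy system and property are hypothetical and chosen for brevity; actual model checkers support far richer temporal logics and far larger state spaces.

    # Minimal explicit-state check of an invariant (a simple safety property):
    # enumerate all reachable states, report a counterexample trace if needed.
    from collections import deque

    def check_invariant(initial, successors, invariant):
        parent = {initial: None}              # reached states and their predecessors
        queue = deque([initial])
        while queue:
            state = queue.popleft()
            if not invariant(state):
                trace = []                    # reconstruct a counterexample trace
                while state is not None:
                    trace.append(state)
                    state = parent[state]
                return list(reversed(trace))
            for nxt in successors(state):
                if nxt not in parent:
                    parent[nxt] = state
                    queue.append(nxt)
        return None                           # invariant holds in every reachable state

    # Toy example: a counter modulo 8 should never reach the value 5.
    print(check_invariant(0, lambda s: [(s + 1) % 8], lambda s: s != 5))
    # [0, 1, 2, 3, 4, 5] -- a counterexample trace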
Verification is the mathematical demonstration of the correctness of the behaviour of an implementation model with respect to its specification. In principle, this yields a complete analysis of the system implementation. In some cases such proofs can be carried out by computer, using appropriate algorithms; otherwise the proofs must be constructed by hand, using an adequate proof methodology. In the latter case, to overcome human fallibility, techniques have been developed to check the proofs by computer. These proof checking techniques are difficult and labour intensive to apply, but when successful they give an unparalleled level of precision and reliability.
Testing is a technique that can be applied both to prototypes, in the form of a systematic simulation, and to final products. Two basic approaches are transparent box testing, where the internal structure of an implementation can be observed (e.g. in the form of the source code), and black box testing, where only the communication between the system under test and its environment can be observed. In practice, testing mostly lies somewhere in between these two extremes, which is referred to as grey box testing. Testing is the validation technique that is most used in practice, but almost exclusively on the basis of informal and heuristic methods. Such procedures can be improved significantly by the use of formal methods, especially in the area of the (semi-)automatic generation and selection of test cases from formal specifications.
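As a small illustration of automatic test generation from a formal specification, the sketch below derives input/expected-output test sequences from a hypothetical Mealy-machine specification by covering each specified transition once. The specification format and coverage criterion are simplifying assumptions; practical test generation tools use considerably more elaborate models and selection strategies.

    # Hypothetical specification: a Mealy machine given as a mapping
    # (state, input) -> (expected output, next state).
    from collections import deque

    def shortest_prefix(spec, initial, target):
        """Shortest sequence of (input, expected output) steps reaching 'target'."""
        parent = {initial: None}
        queue = deque([initial])
        while queue:
            s = queue.popleft()
            if s == target:
                steps = []
                while parent[s] is not None:
                    prev, step = parent[s]
                    steps.append(step)
                    s = prev
                return list(reversed(steps))
            for (state, inp), (out, nxt) in spec.items():
                if state == s and nxt not in parent:
                    parent[nxt] = (s, (inp, out))
                    queue.append(nxt)
        return None

    def transition_cover_tests(spec, initial):
        """One test case per specified transition: a reaching prefix plus that step."""
        tests = []
        for (state, inp), (out, _) in spec.items():
            prefix = shortest_prefix(spec, initial, state)
            if prefix is not None:
                tests.append(prefix + [(inp, out)])
        return tests

    # Toy connection protocol: 'conn' opens, 'data' echoes, 'disc' closes.
    spec = {
        ("idle", "conn"): ("ack",  "open"),
        ("open", "data"): ("echo", "open"),
        ("open", "disc"): ("ack",  "idle"),
    }
    for test in transition_cover_tests(spec, "idle"):
        print(test)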
Current methods and technology provide the means for the specification, static analysis and simulation of large real-life systems. Systematic testing and model checking, however, are generally limited to systems with state spaces in the order of 10^8 states. Verification based on syntactic manipulation of specifications is not restricted by the size of the state space, but by the size of the specification itself. Proving properties by hand is limited to system descriptions that do not extend beyond a few pages. Although this limits the scope of the current validation techniques, they are being applied successfully in practical validation studies for real-life applications. Success stories involving the groups from CTIT and CWI include the validation of the BOS control system of the storm surge barrier in the Nieuwe Waterweg, and the analysis of a widely implemented sliding window protocol specification. In both cases previously unknown but fatal flaws were exposed.
In spite of these and many other successful applications of validation techniques, many systems still fall outside the scope of the current methods and tools. Even small to medium size systems can have state spaces in the order of 10^1000 states. Most of these systems will also require more than a few pages of specification. An important goal of the project, therefore, is to work towards closing the gap between the current limits of technology and the size of `real' systems.
One way to handle very large state spaces is to encode them efficiently in propositional logic (e.g. the reachability of a state can be encoded in one bit). Such encodings can be further optimised by the sharing of common code fragments using Binary Decision Diagrams (BDDs) and related techniques. The BDD literature features claims of state spaces of up to 10^1300 states that have been analysed in this way. Based on completely different propositional techniques, the prover HeerHugo has been used at CWI to show the safety of railway guarding systems with at least 10^1000 states.
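The sketch below illustrates only the basic idea that reachability information can be packed into one bit per state: the set of reached states is a single integer used as a bit vector, and an exploration step amounts to a few bitwise operations. It does not show BDD sharing or the HeerHugo approach, and the example system is hypothetical.

    # One bit per state: bit i of 'reached' is set iff state i has been reached.
    def reachable_bitset(transitions, initial):
        reached = 1 << initial
        while True:
            new = reached
            for src, tgt in transitions:
                if (reached >> src) & 1:       # source already reached ...
                    new |= 1 << tgt            # ... then mark the target as reached
            if new == reached:                 # fixed point: nothing new was added
                return reached
            reached = new

    # Toy example: a ring of four states plus an unreachable fifth state.
    bits = reachable_bitset([(0, 1), (1, 2), (2, 3), (3, 0)], initial=0)
    print([s for s in range(5) if (bits >> s) & 1])   # [0, 1, 2, 3]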
Experience indicates that individual characteristics of systems are very important for the success of these methods. There are examples where BDDs are ineffective, whereas provers like HeerHugo are successful, and vice versa. There are even examples where propositional encoding performs worse than explicit state space analysis. As yet, there is little understanding of how the success of the different methods depends on the characteristics of the systems at hand, and indeed of the relative power of the different propositional encoding techniques.
A second approach towards a more efficient representation of state spaces is the use of so-called true concurrency models. The state space explosion problem typically occurs with systems that consist of many components running in parallel. Most behavioural models use a so-called interleaving semantics that creates global states from all possible combinations of local states. This makes state spaces grow with the product of the sizes of the local state spaces. In true concurrency models, information is defined in terms of the causal dependencies between system actions. The complexity of this information grows only with the sum of that of the components.
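The following small sketch merely makes the product-versus-sum argument concrete for a hypothetical system of independent components; it is not itself a true concurrency model.

    # Interleaving: one global state per combination of local states.
    from itertools import product

    components = [set(range(10)) for _ in range(5)]   # five components, 10 local states each
    global_states = set(product(*components))

    print(len(global_states))                 # 100000 -- grows with the product
    print(sum(len(c) for c in components))    # 50     -- causal information grows with the sum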
Yet another approach is to somehow reduce the state spaces that must be generated or maintained. Typical examples are on-the-fly state space generation (generating only those parts of the state space that are really necessary at a given moment), and partial order reduction techniques (creating only one instead of all interleavings of independent local behaviours). Still another technique is state abstraction (representing many states by one), which can be used to suppress information that is irrelevant to the validation property in question. A typical example is to replace unimportant system components by single state systems.
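As an illustration of state abstraction, the sketch below explores concrete states but prunes on their abstract values, so that many concrete states are represented by a single abstract one. The abstraction function is a hypothetical example; in practice the abstraction must be chosen so that it preserves the property under investigation.

    # Explore the state space modulo an abstraction function: a successor is
    # only explored if its abstract value has not been seen before.
    def abstract_reachable(initial, successors, abstract):
        seen = {abstract(initial)}
        frontier = [initial]
        while frontier:
            state = frontier.pop()
            for nxt in successors(state):
                a = abstract(nxt)
                if a not in seen:
                    seen.add(a)
                    frontier.append(nxt)
        return seen

    # A counter over 0..999999: for a property that depends only on parity,
    # abstracting to the parity leaves just two states instead of a million.
    successors = lambda n: [(n + 1) % 1000000]
    print(abstract_reachable(0, successors, abstract=lambda n: n % 2))   # {0, 1}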
The effects of such techniques can be amplified substantially if the models used are compositional with respect to common operators for system synthesis, e.g. parallel composition. This means that the model (properties) of the entire system is (are) a function of the models (properties) of its components. In the common interleaving approach the global state space reduction is then the product of the local reductions, resulting in a reduction that is exponential in the number of components.
A completely different approach is to exploit the regularity of the syntactic structure of specifications in order to reduce complexity. In many (protocol) systems the ultimate state space can be huge, whereas the internal structure may be quite simple, or can be transformed into something with a highly regular structure. This can be exploited to reduce the size of the generated state spaces quite drastically, or even to avoid the generation of large semantic models altogether.
An important technique in this vein is the symbolic treatment of data values. Large domains of data types can cause arbitrary blow-up of state spaces, whereas the qualitative behaviour of systems depends only on the effect that data values have on conditional control structures. This leads to equivalence classes of data values, where the number of classes can be small even if the domains are infinite. Many problems can be solved by representing the classes instead of the data values. Moreover, the classes can be constructed using on-the-fly methods. This approach is followed in many simulators, e.g. the LOTOS simulator Smile.
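The sketch below gives a toy illustration of this idea: the guards occurring in the conditional control structure induce equivalence classes of data values, and only these classes, not the individual values, need to be distinguished. The guards and the data domain are hypothetical.

    # Values are behaviourally equivalent when they satisfy exactly the same guards.
    guards = [lambda x: x < 0, lambda x: x == 0, lambda x: x > 100]

    def data_class(x):
        """The equivalence class of x: the tuple of truth values of all guards."""
        return tuple(g(x) for g in guards)

    domain = range(-1000, 1000)                      # 2000 concrete values ...
    print(len({data_class(x) for x in domain}))      # ... but only 4 distinct classes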
An entirely different way to keep the representation of large state spaces small is to encode the states by data parameters, and to characterise the behaviour of the system by condition-effect functions. The descriptions obtained in this way are referred to as linear process equations; they have a very regular structure. Such techniques can be applied to process algebraic specification formalisms that include features for the specification of data, such as muCRL and LOTOS. First studies on transforming arbitrary muCRL process descriptions, including parallelism, into this form are promising: medium-sized systems can be transformed into this format with relative ease.
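For illustration only, the sketch below represents a process in a linear-equation-like style: a single data parameter and a flat list of summands, each consisting of a condition, an action and an effect on the parameter. The names and the example process are invented and do not follow muCRL syntax.

    # Linear-style description: condition / action / effect triples over one parameter d.
    summands = [
        (lambda d: d < 3,   "send",  lambda d: d + 1),
        (lambda d: d > 0,   "ack",   lambda d: d - 1),
    ]

    def enabled_steps(d):
        """All (action, next state) pairs enabled in state d."""
        return [(act, eff(d)) for cond, act, eff in summands if cond(d)]

    print(enabled_steps(0))   # [('send', 1)]
    print(enabled_steps(2))   # [('send', 3), ('ack', 1)]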
For checking the behavioural equivalence between specifications and implementations of systems, the so-called cones and foci proof method has been developed. If specification and implementation are both given as linear process operators, the problem is reduced to six classes of identities on data parameters. Although these identities are not always easy to prove, they do capture the essence of the distributed system under study. Moreover, they pave the way for automated assistance, for instance by generating these classes of identities automatically and by solving at least the trivial ones automatically.
The same principles may be exploited for checking logical requirements. Current research aims to develop proof systems for modal formulas with data on linear process operators. After identifying the basic rules, it is intended to develop a proof methodology, in the same style as the cones and foci method, to make proving such modal formulas efficient.
Current process description languages, such as LOTOS, SDL, muCRL, CHILL, Erlang and Promela have been designed to model concurrent, communicating systems with discrete events. They are much less suited, however, for the specification and, in particular, the analysis of quantitative features such as timed, probabilistic, and stochastic behaviour.
Networking systems and telematics applications, however, crucially depend on the correctness of their quantitative behaviour, which is usually encoded in their Quality of Service (QoS) parameters. Typical examples are measures such as maximal delay, throughput, jitter, failure ratios, etc. In modern network and multimedia systems the difference between functional and quantitative properties has become blurred. This also holds for the class of so-called embedded systems, which is rapidly gaining importance for the modelling of all sorts of (safety-)critical applications.
During the past decade a lot of research has been carried out in the area of timed specification languages. One of the most significant developments has been the introduction of the timed automata model. In this model the infinite state spaces induced by continuous time parameters can, under reasonable conditions, be reduced to a finite region graph, allowing algorithmic property checking. In this way, although still only for relatively small systems, modal properties and trace inclusions can be checked.
Compared to timed systems, much less research has so far been devoted to the development, and especially the application, of probabilistic systems (where nondeterministic choices are resolved with specified probabilities) and stochastic systems (where timed action occurrences are determined by probability distributions). In the performance analysis community, research into so-called stochastic Petri nets has some tradition. In search of a compositional theory of stochastic systems, research in the area of Stochastic Process Algebras (SPA) has also started to flourish. The potential benefits of these models are great, as they can be used for the structured specification and analysis of functional and stochastic system features in one model. Process algebraic models can be related both to labelled transition systems, the basic model of functional behaviour, and to well-known analytic and simulation models of stochastic behaviour such as (generalised) (semi-)Markov processes. Although application examples are still restricted to small systems, much progress is expected because of the underlying compositional model.
Research into the systematic improvement of validation methods and tools can profit from the substantial progress that is being made in the very active international validation research scene. We will focus on two main research tasks that are highly relevant for the validation of telematics systems.
The approaches of Task A towards closing this gap can be divided into three main classes.
Task B aims at the extension of existing models of behaviour to accommodate the specification and analysis of quantitative features.
An important criterion for the success of the technology developed within the SVC will be its potential for effective applicability. This requires the organisation of feedback through well-chosen application case studies. However, the usefulness of certain methods and tools (or the lack of it) must also be communicated to the intended users of the validation technology, so that in the longer term validation technology can be transferred from academia to industry. Thus the application-oriented tasks of the SVC are:
The application of methods and tools to realistic case studies. The case studies will be evaluated to guide subsequent research and development, to improve methods and tools, and to obtain a better understanding of the ranges of applicability of the various methods and tools. This activity also entails the upgrading of methods and tools to improve their practical usability, both as a preparation for case studies and as a result of application experience.
Evaluation of a general set of state-of-the-art techniques and tools, both by pro-active studies and by the analysis of case studies and application projects. It will be important to establish which problems can, and which cannot, be tackled using given methods and techniques, and under what conditions. How should they be incorporated in industrial design practice, and what additional constraints should then be fulfilled? What factors determine the success or failure of a certain method? In what way should a technique be extended, embedded or adopted? Systematically analysing the case studies that have been carried out ensures that we learn from our experience and can more easily assess the possibilities, costs, bottlenecks, etc. of new projects that involve the use of validation methods.
To function properly this task must include the maintenance of a working set of methods, techniques, and tools, and awareness of validation projects and developments elsewhere.
As was already emphasised, feedback from practical applications is essential for the fundamental research, improving quality as well as ensuring relevance. We intend to validate selected systems taken from realistic application contexts in order to obtain information about the applicability of the various methods and tools. The feedback will be fourfold, viz.
To provide the project with a smooth start, the application studies used initially for assessment will be taken from current validation efforts by the participants. These include:
With the aid of the steering committee, new case studies will be selected continuously for this project activity. These case studies must be well chosen, in the sense that they both address realistic validation issues and have the potential to provide the project with the desired feedback. Current experience of the proposers indicates that the potential number of such case studies is large and increasing. The steering committee members will play an active role in the identification of the case studies; in addition to contributing their technical expertise, they serve as contact persons for their own organisations as well as for industry at large.
It is intended that in the course of the project larger application projects can be started as spin-off activities of the core project, in the form of validation studies for industry as related, but independent contract research, culminating in a basic service offered by the Telematics Institute. In the longer run, such contract research is preferred over regular case studies, as it implies a greater commitment of companies and will therefore most likely have a greater impact.
At the time of writing the proposers are involved in discussions concerning applications that could be used as application case studies or even as validation contract research. They include projects on the effect of adding features to the CHIPPER (KPN), on the verification of railway control systems (EURIS project: NS-RIB, NS-HR, CMG), on BOS script validation (CMG, Rijkswaterstaat), and on the functional correctness of mobile systems (Lucent).
Transferring knowledge of validation methods and tools is an explicit objective of the SVC proposal. In part this objective is achieved by the execution of the application case studies and validation contract research in cooperation with industry. The steering committee will also serve as a means of communication between the project and industry. In addition to these natural channels, the project will include other, explicit activities to improve knowledge transfer.
Effective tool support is essential for the acceptance of formal methods. An architectural framework must be developed for the integration of different tools and techniques in a way that allows designers to use them effectively without being distracted unnecessarily by details of the technique or tool in question. The tool bench currently being developed in the Testbed project is an example of such a framework. A similar architecture could serve as a framework for mass customisation of validation methods within a common environment.
Although the fundamental development and/or extension of methods and tools is part of Tasks A and B, additional effort will usually be required to tailor them to the needs of the application case in question, and to improve general usability in terms of performance, reliability, user interfacing, and documentation. It is not within the scope of the project to develop full-fledged industrial-strength tools, but reasonable efforts must be made to bring especially prototype tools to a sufficient level of user-friendliness.
Two activities that we want to mention here specifically, and that are motivated by past application experience, are: