Testing is the most commonly used and most important activity in
software quality assurance, frequently accounting for over 50% of the
entire cost of software development. For large systems with complex
functionality and large input spaces, it is imperative to automate
software testing. Automation should include not only automated execution
of test cases, but also automatic test data selection and automatic
evaluation of test outputs. While automatic test data generation has
received considerable research attention, comparatively little work
addresses test oracles.
There has recently been increased activity in the field of test
evaluation. Several approaches have been proposed to overcome the oracle
problem, such as model-based test oracles, log file analysis,
metamorphic testing, symmetric testing, statistical hypothesis tests,
and many others. While these techniques are helpful, many open
questions remain: ideas must be exchanged, new approaches proposed and
evaluated, and problems identified and solved.
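To make the oracle problem concrete, the sketch below (an illustration of our own, not drawn from any specific workshop contribution) shows metamorphic testing: instead of requiring an oracle that knows the exact expected output for each input, we check a relation that must hold between the outputs of two related executions, here the identity sin(x) = sin(π − x) for a sine implementation.

```python
import math
import random

def check_metamorphic_sine(trials=1000, tol=1e-9):
    """Metamorphic test of math.sin using the relation sin(x) == sin(pi - x).

    No exact expected value (oracle) is needed for any individual input;
    we only verify that the metamorphic relation holds between the two
    related executions.
    """
    for _ in range(trials):
        x = random.uniform(-10.0, 10.0)
        # If the relation is violated beyond floating-point tolerance,
        # the implementation under test is likely faulty.
        if abs(math.sin(x) - math.sin(math.pi - x)) > tol:
            return False
    return True
```

A correct sine implementation satisfies the relation on all sampled inputs, so the check passes without ever consulting a reference table of expected values.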
The goal of this workshop is to bring together researchers, engineers,
and practitioners to discuss and evaluate the latest challenges and
breakthroughs in the field of test evaluation and to identify future
trends and problems in this area. Bringing together participants from
both academia and industry is intended to foster a two-way flow of
information: to make academic researchers aware of practical problems
from industry, and to facilitate the transfer of theoretical research
results into practical applications.