First International Workshop on Software Quality (SOQUA 2004)
in conjunction with the Net.ObjectDays 2004
Fair and Convention Center, Erfurt, Germany, September 30, 2004
Invited Talks | Contributed Talks
Invited Talks

Measuring the Effectiveness of Software Testing
 
Harry M. Sneed (AneCon GmbH, Vienna, Austria)

Abstract. In 1978 Harry Sneed set up the first commercial software test laboratory in Budapest, charging DM 75,- per test case and DM 100,- for each error found. The laboratory was used to test the Integrated Transport Steuerung system of the German railroad and the BS2000 operating system of Siemens. Today, some 26 years later, managers are looking for a means to justify the cost of testing. While working as a test consultant for a Viennese software house from 1998 until 2003, Harry Sneed conceived a set of metrics for measuring the effectiveness of the test operations there. These metrics were intended to measure the performance of the test department, but they are equally valid for measuring test operations anywhere. In fact, with these metrics it should be possible to convert software testing from an art, as perceived by Glenford Myers in 1975, to a science, as defined by Lord Kelvin in 1875. The metrics were obtained using the Goal/Question/Metric method of Basili and Rombach and were refined through three years of practical application. In effect, they are a continuation of the test measurement work Sneed began as a young test entrepreneur in 1978. They are supported by a set of tools designed for both static and dynamic analysis, as well as for evaluating the results of both. Working as a test team leader at the Wirtschaftskammer in Vienna, Sneed applied these metrics to successfully predict the effort required to test a complex web application. In this presentation, attendees will be exposed to the experience of 30 years of software testing.

Slides: [PDF] [PPT]
Demo Files: [ZIP]


Testing in the Component Age
 
Mario Winter (University of Applied Sciences Cologne, Germany)

Abstract. Since the disadvantages of object-oriented approaches with regard to quality, especially the reusability of the resulting software, became evident in the mid-1990s, component-oriented software development has become very popular. Nevertheless, the testing of component-oriented software--just as the testing of object-oriented software at the beginning of the 1990s--was neglected for a long while.
 
This contribution describes the differences between object-oriented and component-oriented software. Furthermore, the different roles within the testing process are specified, and possible forms of specification-based testing of components are outlined. In particular, the important question--in the context of component-based software--of how components whose interfaces are specified through contracts can be tested is addressed. Notes on corresponding test tools round off this contribution.

Slides: [PDF]

Contributed Talks

Inspections in Small Projects
 
Juha Iisakka (University of Oulu, Finland)

Abstract. Practically all inspection and review methods focus on projects for developing new software that involve numerous people, but software companies also have small projects on which only a few people are working, e.g. maintenance-oriented changes to existing software systems. Unfortunately, these small projects do not necessarily have the resources to implement inspections efficiently. This paper raises certain problems that small projects have with inspections and discusses how suitable different forms of inspection are for small projects.

Slides: [PDF]


Generic Environment for Full Automation of Benchmarking
 
Tomás Kalibera (Charles University, Prague, Czech Republic)
Lubomír Bulej (Charles University, Prague, Czech Republic; Czech Academy of Sciences, Prague, Czech Republic)
Petr Tuma (Charles University, Prague, Czech Republic)

Abstract. Regression testing is an important part of software quality assurance. We work to extend regression testing to include regression benchmarking, which applies benchmarking to detect regressions in performance. Given the specific requirements of regression benchmarking, many contemporary benchmarks are not directly usable for it. To overcome this, we make a case for designing a generic benchmarking environment that facilitates the use of contemporary benchmarks in regression benchmarking, analyze the requirements, and propose an architecture for such an environment.
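The core idea of regression benchmarking--collecting benchmark samples from two versions of the software and flagging a statistically significant slowdown--can be sketched as follows. This is a minimal illustration under assumed conventions, not the authors' environment; the function name, sample data, and threshold are all hypothetical.

```python
import statistics

def detect_regression(baseline, candidate, threshold=3.0):
    """Flag a performance regression when the candidate's mean running
    time exceeds the baseline mean by more than `threshold` times the
    combined standard error (a simple two-sample z-style comparison)."""
    mean_b = statistics.mean(baseline)
    mean_c = statistics.mean(candidate)
    se = ((statistics.variance(baseline) / len(baseline)) +
          (statistics.variance(candidate) / len(candidate))) ** 0.5
    if se == 0.0:
        return mean_c > mean_b
    return (mean_c - mean_b) / se > threshold

# Hypothetical benchmark samples (milliseconds) from two software versions.
old_version = [10.1, 10.3, 9.9, 10.0, 10.2, 10.1]
new_version = [12.4, 12.6, 12.3, 12.5, 12.4, 12.7]

print(detect_regression(old_version, new_version))  # prints True: regression
```

In a real environment, repeated runs and warm-up handling would be needed to make the samples trustworthy; the statistical comparison itself stays this simple.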

Slides: [PDF] [PPT]


Property-Oriented Testing: An Approach to Focusing Testing Effort on Behaviors of Interest
 
Shuhao Li (Changsha Institute of Technology, China)
ZhiChang Qi (Changsha Institute of Technology, China)

Abstract. The behaviors of reactive systems are characterized by events, conditions, actions, and information flows. Complex reactive systems further exhibit hierarchy and concurrency. Since such systems usually exhibit numerous behaviors, they can hardly receive both comprehensive and in-depth testing. This paper presents a property-oriented testing method for reactive systems. A UML state machine is employed to model the system under test (SUT), and temporal logic is used to specify the property to be tested. Targeted test sequences are then derived from the model according to the given property. Using this approach, usually only a small portion of the total behaviors needs to be tested. The method is well suited to situations where testers must focus on only the critical properties of the SUT because of a limited project budget.
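The essence of deriving a targeted test sequence from a state-machine model--finding an event path that exercises exactly the states named by the property of interest--can be sketched with a breadth-first search. The state machine and property below are hypothetical stand-ins, not the paper's case study, and a full treatment would check a temporal-logic formula rather than plain reachability.

```python
from collections import deque

# Hypothetical state machine of the SUT: state -> {event: next_state}.
transitions = {
    "Idle":      {"connect": "Connected"},
    "Connected": {"send": "Sending", "disconnect": "Idle"},
    "Sending":   {"ack": "Connected", "timeout": "Error"},
    "Error":     {"reset": "Idle"},
}

def derive_test_sequence(initial, target):
    """Breadth-first search for the shortest event sequence driving the
    model from `initial` into `target`, the state singled out by the
    property under test (e.g. 'the system can reach Error')."""
    queue = deque([(initial, [])])
    visited = {initial}
    while queue:
        state, path = queue.popleft()
        if state == target:
            return path
        for event, nxt in transitions.get(state, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [event]))
    return None  # target unreachable in the model

print(derive_test_sequence("Idle", "Error"))  # ['connect', 'send', 'timeout']
```

Only the behaviors along such paths need to be executed against the SUT, which is why the approach scales to systems whose full behavior space is untestable.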


Test Oracles Using Statistical Methods
 
Johannes Mayer (University of Ulm, Germany)
Ralph Guderlei (University of Ulm, Germany)

Abstract. The oracle problem is addressed for random testing and the testing of randomized software. The presented Statistical Oracle is a Heuristic Oracle using statistical methods, especially statistical tests. The Statistical Oracle is applicable when there are explicit formulae for the mean, the distribution, and so on, of characteristics computable from the test result. However, the present paper deals only with the mean. As with the Heuristic Oracle, the decision of the Statistical Oracle is not always correct. An example from image analysis is shown in which the Statistical Oracle has been applied successfully.
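The mean-based variant described above can be illustrated with a small sketch: when the expected mean and standard deviation of a characteristic are known analytically, the oracle accepts a run whose sample mean lies within a few standard errors of the expected mean. The function name, program under test, and acceptance threshold are hypothetical, and, as the abstract notes, such a verdict is probabilistic rather than guaranteed.

```python
import random
import statistics

def statistical_oracle(samples, expected_mean, expected_stdev, z_limit=3.0):
    """Accept the test run when the sample mean lies within `z_limit`
    standard errors of the analytically known mean (a simple z-test)."""
    standard_error = expected_stdev / len(samples) ** 0.5
    return abs(statistics.mean(samples) - expected_mean) <= z_limit * standard_error

# Hypothetical randomized program under test: should emit values uniform
# on [0, 1), whose mean is 0.5 and standard deviation is sqrt(1/12).
random.seed(42)
outputs = [random.random() for _ in range(10000)]
print(statistical_oracle(outputs, 0.5, (1 / 12) ** 0.5))
```

A systematically biased implementation (say, one whose outputs cluster around 0.9) would push the sample mean far outside the tolerance band and be rejected, while a correct one is accepted with high probability.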

Slides: [PDF]


Assessing and Interpreting Object-Oriented Software Complexity with Structured and Independent Metrics
 
Roland Neumann (Hasso-Plattner-Institute for Software Systems Engineering GmbH at the University of Potsdam, Germany)
Dennis Klemann (Hasso-Plattner-Institute for Software Systems Engineering GmbH at the University of Potsdam, Germany)

Abstract. Object-oriented software complexity is difficult to assess because of its manifold influences, ranging from cognitive science to algorithmic complexity theory. A practical process for a structured complexity assessment is presented in this paper. It starts with considerations for measurement and data preparation. Using mathematical transformation techniques, independent complexity metrics are derived. With these results, complexity aspects of a software system can be defined. This makes a comparison of complexity across system classes possible, which helps in getting an overview of large systems. These process steps are then illustrated with an industrial example.

Slides: [PDF]


Cate: A System for Analysis and Test of Java Card Applications
 
Peter Pfahler (Universität Paderborn, Germany)
Jürgen Günther (ORGA Kartensysteme GmbH, Paderborn, Germany)

Abstract. Cate is a domain-specific testing environment. It integrates static and dynamic analyses designed for Java Card application software. Cate supports the test process by analyzing the command/response behavior of the software, by performing test coverage analysis, and by providing tools to visualize the analysis results. This paper gives a concise overview of the system, which is successfully employed in the area of smart card development for mobile phones.

Slides: [PDF]


Experience-Based Refactoring for Goal-Oriented Software Quality Improvement
 
Jörg Rech (Fraunhofer Institute for Experimental Software Engineering, Kaiserslautern, Germany)
Eric Ras (Fraunhofer Institute for Experimental Software Engineering, Kaiserslautern, Germany)
Andreas Jedlitschka (Fraunhofer Institute for Experimental Software Engineering, Kaiserslautern, Germany)

Abstract. In agile software development, refactoring is an important phase for the continuous improvement of software quality. Unfortunately, the application of refactorings is very subjective and heavily dependent on the expertise of the developers, resulting in unstable quality assurance. In this paper, we present an experience-based approach for the semi-automatic and goal-oriented refactoring of software systems based on didactically augmented experiences, following the experience factory paradigm. This approach promises accelerated acquisition, (re-)use, and learning of knowledge in the refactoring process.

Slides: [PDF] [PPT]


SIP Robustness Testing for Large-Scale Use
 
Christian Wieser (University of Oulu, Finland)
Marko Laakso (University of Oulu, Finland)
Henning Schulzrinne (Columbia University, New York, USA)

Abstract. The Session Initiation Protocol (SIP) is a signaling protocol for Internet telephony, multimedia conferencing, and instant messaging. We describe a method for assessing the robustness of SIP implementations by means of a tool that detects vulnerabilities. We prepared the test material and carried out the tests against a sample set of existing implementations. Many of the available implementations failed to perform in a robust manner under the tests. Some failures had information security implications and should hence be considered vulnerabilities. The results were reported to the respective vendors and, after a grace period, the test suite is now publicly available. By releasing the test material to the public, we hope to contribute to more robust SIP implementations.
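The general shape of such robustness test material--systematically malformed variants of an otherwise valid protocol message--can be sketched as follows. The baseline message follows the header layout of RFC 3261, but the generator, its mutation strategies, and the message contents are hypothetical simplifications, not the actual test suite.

```python
# A simplified but well-formed SIP INVITE; header names follow RFC 3261.
BASELINE = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP host.example.com\r\n"
    "From: <sip:alice@example.com>\r\n"
    "To: <sip:bob@example.com>\r\n"
    "Call-ID: 1234@host.example.com\r\n"
    "CSeq: 1 INVITE\r\n"
    "Content-Length: 0\r\n\r\n"
)

def generate_malformed(message):
    """Yield malformed variants of a SIP request: for each header line,
    one variant drops the header and one inflates its value."""
    head, _, _ = message.partition("\r\n\r\n")
    lines = head.split("\r\n")
    request_line, headers = lines[0], lines[1:]
    for i in range(len(headers)):
        # Variant 1: omit a (possibly mandatory) header entirely.
        kept = headers[:i] + headers[i + 1:]
        yield "\r\n".join([request_line] + kept) + "\r\n\r\n"
        # Variant 2: overflow the header value with a long run of bytes.
        name, _, _ = headers[i].partition(":")
        inflated = headers[:i] + [name + ": " + "A" * 65536] + headers[i + 1:]
        yield "\r\n".join([request_line] + inflated) + "\r\n\r\n"

test_cases = list(generate_malformed(BASELINE))
print(len(test_cases))  # 12 variants: 6 headers x 2 mutations
```

Each variant would be sent to the implementation under test, which is then observed for crashes, hangs, or other non-robust behavior.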

Slides: [PDF]


Johannes Mayer, 2004-10-06