Thesis defences

PhD Oral Exam - Dong Jae Kim, Software Engineering

De-Mystifying Myths in Test Code Quality from Perspective of Test Code Design and Maintenance


Date & time
Monday, August 19, 2024
10 a.m. – 1 p.m.
Cost

This event is free

Organization

School of Graduate Studies

Contact

Nadeem Butt

Where

ER Building
2155 Guy St.
Room 1222

Wheelchair accessible

Yes

When studying for a doctoral degree (PhD), candidates submit a thesis that provides a critical review of the current state of knowledge of the thesis subject as well as the student’s own contributions to the subject. The distinguishing criterion of doctoral graduate research is a significant and original contribution to knowledge.

Once accepted, the candidate presents the thesis orally. This oral exam is open to the public.

Abstract

Software testing is crucial for ensuring the reliability and robustness of software systems. It involves executing the software program and verifying its behavior against a set of developer-defined verdicts, in order to identify and fix bugs that deviate from those verdicts, aiming for fault-free software. Advances in technology have led to increased efforts to automate the software testing process, where tests are created once and run repeatedly to detect regressions introduced by new code changes.

While a plethora of research targets various aspects of test automation, such as automated test prioritization, fault localization, and program repair, the design of tests remains an under-explored area. The software industry requires robust standards for test design and maintenance, which can significantly impact overall software quality. This thesis seeks to better understand how to improve the quality of test code through effective design and maintenance practices. To achieve this, we explore diverse aspects of test design using qualitative, quantitative, and automated approaches, aiming to demystify what makes test code maintainable and extensible for developers.

The first part of this thesis examines test smells, a term coined to describe design issues that negatively impact the maintainability of test code. Since its inception, the concept of test smells has been widely accepted in academia, yet its applicability in industry remains largely unknown. Existing research into the applicability of test smells offers conflicting evidence about their real negative impact on test maintainability. Inspired by this challenge, the first aim of the thesis is to clarify developers' perceptions of test smells by empirically assessing how developers address existing test smells in their test code, and to assess their effect on defect-proneness with a regression model. One major finding is that many proposed test smells persist, and those that disappear are often removed only as a by-product of code deletion and traditional source code refactoring activities, such as moving test code to improve cohesion. If test smells affected software defect density, their removal should decrease that density; yet our regression model indicates that both the removal and the addition of test smells have a negligible impact on defect densities. A significant contribution of this work is providing empirical support for re-ranking current test smell catalogues.
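To make the term concrete for readers, the following is a minimal sketch of one commonly catalogued smell, Assertion Roulette; it assumes JUnit 5, and the Cart class is invented purely for illustration:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Minimal sketch of the "Assertion Roulette" test smell: several
    // assertions without explanatory messages share one test method,
    // so a failing build does not say which expectation actually broke.
    class CartTest {

        // Toy class invented for illustration only.
        static class Cart {
            private int items;
            void add(int n) { items += n; }
            int size() { return items; }
            double total() { return items * 1.25; }
        }

        @Test
        void testCart() {
            Cart cart = new Cart();
            cart.add(2);
            assertEquals(2, cart.size());           // no failure message here...
            assertEquals(2.50, cart.total(), 0.01); // ...or here: which one failed?
        }
    }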

With increasing efforts towards test automation to minimize testing costs, the software industry has begun to rely more heavily on modern software testing frameworks like JUnit and TestNG for testing Java-based systems. More interestingly, these frameworks bring a new paradigm known as annotation-driven development, which uses test annotations to manage many crucial aspects of the test execution life-cycle. Hence, our second thesis aim is to provide industrial guidelines on how to write effective test cases using annotations by exploring test annotation maintenance activities in the open-source community. One major contribution is a catalogue of test annotation API uses and misuses (e.g., test smells), which is invaluable for guiding the development of automated tooling support for developers.
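As a brief illustration of what annotation-driven development means here, the sketch below (JUnit 5 assumed; the class and fixture are invented for illustration) shows annotations, rather than method naming conventions, telling the framework which methods are fixtures and which are tests in the execution life-cycle:

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;

    // Annotations, not names or inheritance, drive the life-cycle:
    // the framework discovers and orders these methods by annotation.
    class ConnectionTest {

        private StringBuilder log;   // stand-in for a real resource

        @BeforeEach
        void setUp() {               // runs before every @Test method
            log = new StringBuilder("open");
        }

        @Test
        void connectionStartsOpen() {
            assertTrue(log.toString().contains("open"));
        }

        @AfterEach
        void tearDown() {            // runs after every @Test method
            log = null;
        }
    }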

The goal of effective software testing is to identify the presence of bugs through thorough program execution and verification of the verdicts set by developers. The first step towards achieving fault-free software is bug identification. However, once a test fails due to a verdict violation, fixing the bug can be non-trivial because of flakiness and the difficulty of precise fault localization and repair. Under the time constraints of continuous delivery, such tests may sometimes be temporarily disabled. To support this, modern testing frameworks provide annotation APIs like @Ignore to help developers bypass failures. However, such practices result in technical debt. In the third aim of our thesis, we investigate the origin and evolution of these test-disabling practices to better guide developers in maintaining test code when test failures cannot be mitigated.
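A minimal sketch of this disabling practice follows; @Ignore is the JUnit 4 annotation, which JUnit 5 renamed to @Disabled (used here), and the test body and its reason string are invented for illustration:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Disabled;
    import org.junit.jupiter.api.Test;

    // The annotation silences a failing or flaky test, but the
    // unresolved failure lingers as technical debt until the test
    // is fixed and re-enabled.
    class PaymentTest {

        @Disabled("flaky under CI; re-enable once the fix lands")
        @Test
        void roundsToNearestCent() {
            // Placeholder assertion standing in for the failing check.
            assertEquals(10.35, Math.round((10.349 + 0.001) * 100) / 100.0, 0.0001);
        }
    }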

While reusability is essential for facilitating software development, enabling developers to extend code, promote reuse, and enhance maintainability, test code reusability is an under-explored area. Understanding test reusability is crucial given the significant time developers dedicate to testing. Therefore, in the final part of the thesis, our aim is to investigate reusability and extensibility in test code. Interestingly, we take a different view on reusability by examining paradigms like inheritance. We conjecture that while reusability eases maintenance, it carries hidden costs that increase test case redundancy. Therefore, we propose a static and dynamic analysis technique to detect instances where test inheritance may introduce test case redundancies, as illustrated in the sketch below. One major finding is that reusability through inheritance is quite common, and the more complex the software system, the more inheritance is used to ease test maintenance. However, our automated tool analysis detected that 18% of test cases become redundant when using inheritance.

In summary, our research aims to demystify myths surrounding test quality, particularly in terms of test code design and maintainability. Our goal is to provide guidelines for developers, motivating them to extend our results into automated tool support and equipping them with the necessary tools to create more maintainable tests for the long-term improvement of software quality.
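To illustrate the kind of redundancy such a detection technique targets, here is a minimal JUnit 5 sketch (all class names invented for illustration): JUnit re-runs inherited @Test methods for every subclass, so a subclass that overrides nothing the base test depends on simply re-verifies the same behavior.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.ArrayList;
    import java.util.LinkedList;
    import java.util.List;

    import org.junit.jupiter.api.Test;

    // Base test class whose @Test methods are inherited by subclasses.
    class BaseListTest {

        protected List<String> newList() {
            return new ArrayList<>();
        }

        @Test
        void startsEmpty() {          // runs once per concrete test class
            assertEquals(0, newList().size());
        }
    }

    // Intended reuse: the same contract test exercises a new implementation.
    class LinkedListTest extends BaseListTest {
        @Override
        protected List<String> newList() {
            return new LinkedList<>();
        }
    }

    // Redundant reuse: nothing relevant is overridden, so startsEmpty()
    // re-runs against ArrayList, duplicating BaseListTest's own execution.
    class NoOverrideTest extends BaseListTest { }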
