Changes, Evolution and Bugs – Recommendation Systems for Issue Management

contColl
Content-based and collaborative filtering – the two basic approaches to recommendation systems.

Have this bug – I totally recommend it!

Recommendation systems are everywhere nowadays. Thanks to Amazon, Netflix, IMDB, Last.fm and all the others, we intuitively know how to interact with them. There is also a growing interest in recommendation systems for software engineering (RSSE). In this book chapter, we introduce RSSEs for issue management, focusing on tool support for bug duplicate detection and change impact analysis. Regarding the latter, we provide a side-by-side comparison of the two tools Hipikat and ImpRec.

Recommendation systems are everywhere on the web, helping users filter information in domains such as music, news, and books. Driven by successful applications in e-commerce, they have become mature and found their way also into more specialized domains – including software engineering. However, you cannot expect to simply transfer a recommendation engine from a web shop to a software engineering project; the contexts are much too different.

This publication came about when we found a call for chapters for the first book on RSSEs – edited by Robillard, Maalej, Walker, and Zimmermann. The timing was perfect: they called for applications of RSSEs, and I needed somewhere to publish details about my RSSE ImpRec – a recommendation system for change impact analysis. To provide some more meat, we broadened the scope to also cover bug duplicate detection. Also, to make the presentation of ImpRec more interesting, we provided a side-by-side comparison with Cubranic’s RSSE Hipikat.

Recommendation systems – the basics

There are two basic approaches to recommendation systems and RSSEs, as shown on top of this post. First, in a content-based RSSE you generate recommendations based on similarities among elements. To get good output you need large amounts of data that can be characterized in meaningful ways. For books and movies you can look at metadata such as genre, author, country, year and so on – for software artifacts it is less clear, but we show how to use metadata in bug trackers. Second, in a collaborative RSSE, you depend on a large user base. By identifying users similar to yourself, the RSSE can generate recommendations based on what others have “liked”. Unfortunately, unlike for books and movies, there is no rating system in a software engineering project – instead we have to rely on how often a user interacted with a particular software artifact. Finally, it is of course possible to combine content-based and collaborative techniques into hybrid systems.
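To make the distinction concrete, here is a minimal Python sketch – with made-up artifacts, users, and weights, nothing from the chapter – contrasting the two ideas: content-based scoring over artifact metadata, and collaborative scoring based on how often users interacted with the same artifacts.

```python
# Minimal sketch (hypothetical data) contrasting the two basic approaches.
from collections import Counter
from math import sqrt

# Content-based: recommend artifacts whose metadata overlaps with the query artifact.
artifacts = {
    "BUG-1": {"component": "gui", "keywords": {"crash", "dialog"}},
    "BUG-2": {"component": "gui", "keywords": {"dialog", "focus"}},
    "BUG-3": {"component": "db",  "keywords": {"timeout"}},
}

def content_score(a, b):
    """Jaccard overlap of keywords plus a bonus for a matching component."""
    kw_a, kw_b = artifacts[a]["keywords"], artifacts[b]["keywords"]
    jaccard = len(kw_a & kw_b) / len(kw_a | kw_b)
    return jaccard + (0.5 if artifacts[a]["component"] == artifacts[b]["component"] else 0.0)

# Collaborative: no ratings exist in a software project, so use how often
# each user interacted with each artifact instead.
interactions = {
    "alice": Counter({"BUG-1": 4, "BUG-2": 1}),
    "bob":   Counter({"BUG-1": 3, "BUG-3": 2}),
}

def user_similarity(u, v):
    """Cosine similarity between two users' interaction-count vectors."""
    common = set(interactions[u]) & set(interactions[v])
    dot = sum(interactions[u][a] * interactions[v][a] for a in common)
    norm_u = sqrt(sum(c * c for c in interactions[u].values()))
    norm_v = sqrt(sum(c * c for c in interactions[v].values()))
    return dot / (norm_u * norm_v) if dot else 0.0

print(content_score("BUG-1", "BUG-2"))   # content-based: similar GUI bugs
print(user_similarity("alice", "bob"))   # collaborative: overlapping work on BUG-1
```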

Recommending bug duplicates

The first part of the book chapter presents a literature review on bug duplicate detection, a topic pioneered by my co-author Per. The figure below shows the two main approaches to recommending potential duplicates: information retrieval and supervised learning. In both cases the natural language text of the bug description is the fundamental part, but it is often complemented by metadata from the bug tracker. No deep learning here – good results rely on careful feature engineering. Take a look at the preprint for an overview of past work, including the empirical results.

RSSE14_Borg_DuplDet
Two approaches to recommending bug duplicates: information retrieval and supervised learning.
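As a rough illustration of the two approaches – not the actual pipelines surveyed in the chapter – the toy sketch below ranks duplicate candidates by TF-IDF cosine similarity (information retrieval) and trains a classifier on engineered features for report pairs (supervised learning). The reports, components, and labels are all made up.

```python
# Toy sketch of duplicate detection; data and features are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

bug_texts = [
    "app crashes when saving a large file",
    "crash on save with big attachments",
    "login button misaligned on small screens",
]
bug_components = ["io", "io", "gui"]

vec = TfidfVectorizer()
tfidf = vec.fit_transform(bug_texts)

# Information retrieval: rank existing reports by textual similarity to a new report.
new_report = vec.transform(["application crash while saving file"])
print(cosine_similarity(new_report, tfidf))  # highest scores are duplicate candidates

# Supervised learning: engineer features for *pairs* of reports and train a classifier.
def pair_features(i, j):
    text_sim = cosine_similarity(tfidf[i], tfidf[j])[0, 0]
    same_component = 1.0 if bug_components[i] == bug_components[j] else 0.0
    return [text_sim, same_component]

X = np.array([pair_features(0, 1), pair_features(0, 2), pair_features(1, 2)])
y = np.array([1, 0, 0])  # 1 = duplicate pair (toy labels)
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X)[:, 1])  # probability that each pair is a duplicate
```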

Recommending software artifacts in evolving software

The second part of the book chapter presents Hipikat and ImpRec side-by-side – two RSSEs helping developers navigate large software projects. The first thing we did was to come up with the four-step model presented below, i.e., the four steps an RSSE for navigation support needs to perform: 1) model the information space, 2) populate the model, 3) calculate recommendations when the user performs a particular task, and 4) present the recommendations. I really think the four steps provide a good structure when comparing this kind of RSSE – it can probably be used again in the future.

RSSE14_Borg_NavSteps
Four steps of an RSSE supporting navigation in software evolution.
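The four steps could also be captured as a simple interface. The sketch below is just our illustration of that structure – the method names are hypothetical and do not come from the chapter.

```python
# Hypothetical interface mirroring the four-step model; names are illustrative only.
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class NavigationRSSE(ABC):
    @abstractmethod
    def model_information_space(self) -> Dict[str, Any]:
        """Step 1: decide which artifact types and relations the RSSE covers."""

    @abstractmethod
    def populate_model(self, repositories: List[str]) -> None:
        """Step 2: mine issue trackers, version control, etc. to fill the model."""

    @abstractmethod
    def calculate_recommendations(self, task_context: str) -> List[str]:
        """Step 3: given the user's current task, compute candidate artifacts."""

    @abstractmethod
    def present_recommendations(self, recommendations: List[str]) -> None:
        """Step 4: rank and display the candidates to the user."""
```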

Several aspects of Hipikat and ImpRec are similar – not so strange, as they both can be described by the model above. I actually didn’t know anything about the internals of Hipikat when I implemented ImpRec, but interestingly enough several approaches ended up pretty similar. Examples include: 1) a semantic network of artifacts and relations, 2) mining software repositories as the method to capture important relations from the project history, and 3) trace recovery between issue reports based on textual similarity. I guess the similarities are a kind of external validation of the ideas by Cubranic and myself =) The main differences between Hipikat and ImpRec lie instead in the intended use. Hipikat was made to help project newcomers navigate open source software projects. ImpRec, on the other hand, I developed to support change impact analysis in a specific industrial context – a highly tailored solution.

Primary source for ImpRec details

The book chapter is also the main publication describing the technical details of ImpRec. Some details have actually changed, as will be described in a future blog post, but in general – this is the primary source. In a nutshell, the idea behind ImpRec is that issue reports are “junctures” in the information landscape, connecting requirements, test cases, pages in the user manual, etc. The source code as well of course, but that is so far not within the ImpRec scope – we explicitly wanted to focus on all the rest. A first step when deploying ImpRec is to mine the software repositories to establish the network of software artifacts.
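A minimal sketch of what such a mined network could look like, assuming networkx as the graph library and completely made-up artifact identifiers and link types:

```python
# Sketch of ImpRec-style deployment: build a network of artifacts mined from
# the issue repository (hypothetical data; networkx used for illustration).
import networkx as nx

knowledge = nx.DiGraph()

# Issue reports act as "junctures", linking out to other artifact types.
mined_links = [
    ("ISSUE-42", "REQ-A", "implements"),
    ("ISSUE-42", "TC-7",  "verified-by"),
    ("ISSUE-42", "MAN-3", "documented-in"),
    ("ISSUE-57", "REQ-A", "implements"),
]
for issue, artifact, relation in mined_links:
    knowledge.add_edge(issue, artifact, relation=relation)

# Issue-to-issue links recovered from textual similarity (trace recovery).
knowledge.add_edge("ISSUE-42", "ISSUE-57", relation="similar-to")

print(knowledge.number_of_nodes(), knowledge.number_of_edges())
```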

Using ImpRec is simple. The developer enters the textual description of a newly submitted issue, and ImpRec finds similar issues. We use Apache Lucene for this – fast, scalable, and simply fit for purpose. Then, by performing breadth-first searches in the knowledge repository, i.e., the network of software artifacts, ImpRec recommends other artifacts that could be relevant to the developer’s current task. ImpRec thus combines content-based filtering (textual data) and collaboratively created “paths in the information landscape” (established when developers performed standard maintenance and evolution activities).
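A rough sketch of the first step – retrieving textually similar issues as starting points. ImpRec uses Apache Lucene for this; the snippet below substitutes a scikit-learn TF-IDF index just to illustrate the idea, with invented issue reports.

```python
# Finding "starting points" for a new issue: a TF-IDF stand-in for Apache Lucene.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

issue_ids = ["ISSUE-42", "ISSUE-57", "ISSUE-99"]
issue_texts = [
    "watchdog reset when logging subsystem overflows",
    "log buffer overflow causes restart",
    "incorrect checksum in configuration parser",
]

vec = TfidfVectorizer()
index = vec.fit_transform(issue_texts)

def starting_points(new_issue_text, top_n=2):
    """Return the most textually similar historical issues and their scores."""
    scores = cosine_similarity(vec.transform([new_issue_text]), index)[0]
    ranked = sorted(zip(issue_ids, scores), key=lambda p: p[1], reverse=True)
    return ranked[:top_n]

print(starting_points("system restarts after log overflow"))
```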

RSSE14_Borg_ImpRecOverview
Overall picture of ImpRec. The issue tracker connects to other databases through links from individual issues. ImpRec points the user to issues relevant for the current task.

A central part of the ImpRec implementation is the ranking function. Without an accurate artifact ranking, the recommendations will obviously be useless. Apache Lucene finds similar issue reports, i.e., starting points, and breadth-first searches in the knowledge base then identify potentially relevant artifacts (or candidate impact sets, as we primarily designed ImpRec to support change impact analysis). Note that a single artifact can be identified from multiple starting points, as is the case for Req A in the figure below. All potentially relevant artifacts are then merged into a final set, which is ranked to make sure the best recommendations end up on top of the list. The ranking considers the textual similarities, the number of starting points from which an artifact was found, and the centrality of the artifact in the network (in practice really important, as we found when using TuneR). There is also a penalizing effect when following too many links in the knowledge repository.

RSSE14_Borg_ImpRecInternals
The ImpRec ranking function in action. The example shows how two starting points, identified by Apache Lucene, lead to two sets of potentially relevant artifacts. The two sets are then merged into ranked recommendations.
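To illustrate how the pieces could fit together, here is a simplified sketch of the ranking idea: breadth-first searches from each starting point, followed by a score that combines the textual similarity of the starting point, the number of starting points that reached an artifact, the artifact’s centrality, and a penalty per link followed. The weights and the tiny graph are illustrative only – they are not ImpRec’s actual tuning.

```python
# Sketch of the ranking idea: BFS from each starting point, then score artifacts
# by starting-point text similarity, how many starting points reached them,
# network centrality, and a penalty per link followed. Weights are illustrative.
import networkx as nx

knowledge = nx.DiGraph()
knowledge.add_edges_from([
    ("ISSUE-42", "REQ-A"), ("ISSUE-42", "TC-7"),
    ("ISSUE-57", "REQ-A"), ("ISSUE-57", "MAN-3"),
])
centrality = nx.degree_centrality(knowledge)

def recommend(starting_points, max_depth=2, penalty=0.5):
    """starting_points: list of (issue_id, textual_similarity) from the search engine."""
    scores = {}
    for issue, text_sim in starting_points:
        # BFS up to max_depth links away from the starting point.
        reached = nx.single_source_shortest_path_length(knowledge, issue, cutoff=max_depth)
        for artifact, depth in reached.items():
            if artifact == issue:
                continue
            contribution = text_sim * (penalty ** depth) * (1 + centrality[artifact])
            # Artifacts reached from several starting points accumulate score.
            scores[artifact] = scores.get(artifact, 0.0) + contribution
    return sorted(scores.items(), key=lambda p: p[1], reverse=True)

print(recommend([("ISSUE-42", 0.8), ("ISSUE-57", 0.6)]))
# REQ-A is reached from both starting points, so it ends up on top of the list.
```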

Implications for research

  • A solid review of work on bug duplicate detection, covering both information retrieval and supervised learning.
  • A four-step model for describing and comparing RSSEs supporting navigation in evolving software systems.

Implications for practice

  • Aggregated evidence of approaches for bug duplicate detection – many issue trackers have this now, e.g., Bugzilla and HP ALM.
  • Value the work done by earlier developers and mine it to help future co-workers – as demonstrated by Hipikat and ImpRec.

Markus Borg and Per Runeson. Changes, Evolution and Bugs - Recommendation Systems for Issue Management, In Martin Robillard, Walid Maalej, Robert Walker, and Thomas Zimmermann (Eds.), Recommendation Systems in Software Engineering, pp. 407-509, Springer, 2014. (link, preprint)

Abstract

Changes in evolving software systems are often managed using an issue repository. This repository may contribute to information overload in an organization, but it may also help in navigating the software system. Software developers spend much effort on issue triage, a task in which the mere number of issue reports becomes a significant challenge. One specific difficulty is to determine whether a newly submitted issue report is a duplicate of an issue previously reported, if it contains complementary information related to a known issue, or if the issue report addresses something that has not been observed before. However, the large number of issue reports may also be used to help a developer to navigate the software development project to find related software artifacts, required both to understand the issue itself, and to analyze the impact of a possible issue resolution. This chapter presents recommendation systems that use information in issue repositories to support these two challenges, by supporting either duplicate detection of issue reports or navigation of artifacts in evolving software systems.