In recent years, mobile app reviews have become a rich source of user feedback that is of great value for software evolution. However, the volume of such reviews is huge, particularly for popular applications and for large companies offering several applications. To address this issue, several automatic approaches have recently been proposed for identifying useful reviews. The criteria these approaches apply for measuring review usefulness originate from the few existing exploratory studies, in which the usefulness of a review is interpreted as the inclusion of requirements-engineering-related topics. Such interpretations of usefulness, however, are based on the authors' understanding of usefulness rather than on developers' requirements. Ignoring the developers' viewpoint, the authors defined usefulness metrics based on their own observations and developed extraction approaches accordingly. It is therefore unrealistic to expect such approaches to yield results of interest to developers dealing with thousands of reviews daily. To bridge this gap, this study surveyed related work across several domains analysing human-generated feedback, such as reviews, tweets, requirement notes, bug reports, and application testing reports, to define a set of factors for accurately measuring the usefulness of user reviews. These usefulness factors were then validated in a focus group discussion session with experienced mobile app developers. Next, the task of extracting each of the approved factors was automated using Deep Learning and Natural Language Processing (NLP) techniques. Finally, the models designed for extracting each factor were integrated into a final system for automatically extracting useful reviews. Tested on different review datasets, the novel system achieved high accuracy (Aspects: 87%, Feature Requests: 72%, Issues: 67%, User Actions: 73%, and System Actions: 81%) and outperformed state-of-the-art extraction techniques.
Moreover, unlike the state of the art, the proposed system is fully aligned with the developers' viewpoint, as it relies on developer-approved factors for measuring usefulness.
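To illustrate the input/output shape of the factor-extraction task the abstract describes, the sketch below labels a review with usefulness factors using a simple keyword heuristic. This is only a hypothetical baseline: the thesis itself uses Deep Learning and NLP models, and the factor names and cue phrases here are illustrative assumptions, not the actual system.

```python
# Hypothetical keyword baseline for the factor-extraction task described in
# the abstract. The thesis uses deep-learning/NLP classifiers; this sketch
# only shows the task shape: review text in, usefulness-factor labels out.

FACTOR_KEYWORDS = {  # illustrative cue phrases, not the thesis's features
    "Feature Request": ["please add", "would be great", "wish", "should support"],
    "Issue": ["crash", "bug", "doesn't work", "error", "freezes"],
    "User Action": ["i tapped", "i opened", "i clicked", "i installed"],
    "System Action": ["the app closed", "it restarted", "it logged me out"],
}

def label_review(text: str) -> list[str]:
    """Return the usefulness factors whose cue phrases appear in the review."""
    lowered = text.lower()
    return [factor for factor, cues in FACTOR_KEYWORDS.items()
            if any(cue in lowered for cue in cues)]

print(label_review("Please add dark mode, the app crashes at night."))
# → ['Feature Request', 'Issue']
```

In the actual system, each factor would be detected by a trained model rather than a keyword list, and the per-factor predictions would then be combined, as the abstract describes, into a single pipeline that flags useful reviews.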
Date of Award | 1 Aug 2021
Original language | English
Awarding Institution | The University of Manchester
Supervisor | Liping Zhao (Supervisor) & Goran Nenadic (Supervisor)
- User feedback
- App reviews
- Requirements engineering
- NLP
- Application development
- App store mining
- Neural Networks
An Automated System for Identification of Useful User Reviews for Mobile Application Development
Tavakoli, M. (Author). 1 Aug 2021
Student thesis: PhD