Source Selection Languages: A Usability Evaluation

Ixent Galpin, Edward Abel, Norman W. Paton

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review



When looking to obtain insights from data, and given numerous possible data sources, there are certain quality criteria that data retrieved from the selected sources should exhibit so as to be most fit-for-purpose. An effective source selection algorithm can only provide good results in practice if the requirements of the user have been suitably captured; an important consideration, therefore, is how users can effectively express their requirements.
In this paper, we carry out an experiment to compare user performance in two different languages for expressing user requirements in terms of data quality characteristics: pairwise comparison of criteria values, and single-objective constrained optimization. We employ crowdsourcing to evaluate, for a set of tasks, users' ability to choose effective formulations in each language. The results of this initial study show that users were able to determine more effective formulations for the tasks using pairwise comparisons. Furthermore, users tended to express a preference for one language over the other, although it was not necessarily the language in which they performed best.
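To make the contrast concrete, the two requirement-expression styles named above can be sketched as follows. This is an illustrative sketch only, not the paper's actual languages or syntax: the sources, criteria names, and scores below are all hypothetical.

```python
# Hypothetical candidate sources, each scored on two quality criteria (0..1).
sources = {
    "A": {"completeness": 0.9, "timeliness": 0.4},
    "B": {"completeness": 0.6, "timeliness": 0.8},
    "C": {"completeness": 0.3, "timeliness": 0.9},
}

# Style 1: pairwise comparison of criteria values. The user states how much
# more important one criterion is than another; weights follow from the ratio.
# Here: completeness is judged 3x as important as timeliness.
ratio = 3.0
w_completeness = ratio / (ratio + 1.0)   # 0.75
w_timeliness = 1.0 / (ratio + 1.0)       # 0.25

def weighted_score(q):
    return w_completeness * q["completeness"] + w_timeliness * q["timeliness"]

best_pairwise = max(sources, key=lambda s: weighted_score(sources[s]))

# Style 2: single-objective constrained optimization. The user maximizes one
# criterion subject to a hard constraint on another.
feasible = [s for s, q in sources.items() if q["timeliness"] >= 0.5]
best_constrained = max(feasible, key=lambda s: sources[s]["completeness"])

print(best_pairwise)     # "A": completeness dominates the weighted score
print(best_constrained)  # "B": best completeness among sufficiently timely sources
```

Note how the two formulations can select different sources from the same data: the pairwise style trades criteria off continuously, while the constrained style treats one criterion as a hard cut-off.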
Original language: English
Title of host publication: HILDA'18 Proceedings of the Workshop on Human-In-the-Loop Data Analytics
Publication status: Published - 2018
Event: the Workshop - Houston, TX, USA
Duration: 10 Jun 2018 - 10 Jun 2018


Conference: the Workshop


