Stop agonising over informed consent when researchers use crowdsourcing platforms to conduct survey research

Jonathan Lewis, Vilius Dranseika, Søren Holm

Research output: Contribution to journal › Editorial › peer-review

Abstract

Research ethics committees and institutional review boards spend considerable time developing, scrutinising, and revising specific consent processes and materials for survey-based studies conducted on crowdsourcing and online recruitment platforms such as MTurk and Prolific. However, there is evidence to suggest that many users of ICT services do not read the information provided as part of the consent process and they habitually provide or refuse their consent without adequate reflection. In principle, these practices call into question the validity of their consent. In this paper we argue that although the ‘no read problem’ and the routinisation of consent may apply to research participants’ consent practices for studies on crowdsourcing platforms, this is not a serious problem. Furthermore, given that the informational requirements for informed consent in these contexts are minimal, we argue that these participants are, nevertheless, sufficiently informed to give valid consent. We conclude that research ethics committees and institutional review boards should only agonise over the precise details of the informed consent process and materials in those rare cases where appreciable risks to research participants need to be managed.
Original language: English
Pages (from-to): 343-346
Number of pages: 4
Journal: Clinical Ethics
Volume: 18
Issue number: 4
Early online date: 3 Nov 2023
DOIs
Publication status: Published - 1 Dec 2023

Keywords

  • surveys
  • Prolific
  • MTurk
  • consent
  • routinisation
  • Research Ethics
  • Informed Consent
  • no read problem
  • reading speed
