• Ghader Kurdi

Student thesis: PhD


Multiple choice questions (MCQs) are used ubiquitously; they appear in low-stakes, high-stakes, paper-and-pencil, and computerised examinations. Constructing high-quality MCQs, however, is a challenging task that requires time and training. Indeed, the large number of low-quality MCQs used in formal examinations demonstrates how challenging their construction is. This also raises concerns about the effect of these MCQs on students' performance and learning experience, and on the crucial decisions made on the basis of their results (e.g. awarding qualifications to practise medicine). Adding to the challenge, examinations and other instructional activities require a large number of MCQs and, to maintain their validity, those MCQs cannot be reused repeatedly. The challenging task of MCQ construction can be facilitated by automation. To that end, ontologies have been used successfully to generate MCQs automatically. However, the majority of generated MCQs are simple, consisting of few terms and testing only recall of information. There is therefore a need to improve coverage by including complex MCQs that consist of multiple terms and that invoke other cognitive processes.

In this thesis, we investigate the generation of good-quality, multi-term MCQs from ontologies. Specifically, we focus on generating medical case-based questions (CBQs), which we have shown experimentally to be successful (with about 80% appropriate questions). Since question difficulty is a core property of questions that needs to be known prior to their administration, we also investigate controlling the difficulty of auto-generated CBQs. A difficulty measure we developed outperformed the baseline difficulty measure and performed comparably to domain experts.
Finally, to reduce the cost of creating or extending ontologies for question generation, which hinders the adoption of ontology-based question generation approaches in practice, we propose using existing, human-authored questions for the targeted enrichment of medical ontologies with relations. For this purpose, we analysed a corpus of human-authored CBQs and identified two challenges in extracting relations from these questions: resolving non-standard coreference and extracting relations with implicit arguments. We then investigated whether incorporating knowledge about question structure contributes to overcoming these challenges by building a prototype question-sensitive relation extractor. The results demonstrate the usefulness of incorporating knowledge about question structure and suggest that this knowledge would improve the performance of text mining tools that aim to process similar questions.
Date of Award: 1 Aug 2020
Original language: English
Awarding Institution
  • The University of Manchester
Supervisors: Uli Sattler (Supervisor) & Bijan Parsia (Supervisor)


  • relation extraction
  • text mining
  • ontology
  • semantic web
  • case-based questions
  • MCQs
  • multiple choice questions
  • assessment
