New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music

Research output: Contribution to journal › Article › peer-review


Interactive music uses wearable sensors, known as gestural interfaces (GIs), and biometric datasets to reinvent traditional human–computer interaction and enhance music composition. In recent years, machine learning (ML) has become important to the art form, because ML helps process the complex biometric datasets produced by GIs when predicting musical actions (termed performance gestures). ML thereby allows musicians to create novel interactions with digital media. Wekinator is a popular ML software package amongst artists, allowing users to train models through demonstration. It is built on the Waikato Environment for Knowledge Analysis (WEKA) framework, which is used to build supervised predictive models. Previous research has used biometric data from GIs to train specific ML models. However, that research neither informs the optimum choice of ML model within music nor compares model performance. Wekinator offers several ML models. We therefore used Wekinator with the Myo armband GI to study three performance gestures for piano practice. Using these, we trained all models available in Wekinator and investigated their accuracy, how gesture representation affects model accuracy, and whether optimisation is possible. Results show that neural networks are the strongest continuous classifiers, mapping behaviour differs amongst continuous models, optimisation can occur, and gesture representation disparately affects model mapping behaviour, which impacts music practice.
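As a rough illustration of the supervised, train-by-demonstration workflow the abstract describes, the sketch below classifies 8-channel feature vectors (the Myo armband exposes 8 EMG channels) into gesture labels using a simple k-nearest-neighbour rule. The feature values, gesture labels, and the choice of k-NN are illustrative assumptions for this page, not the models or data from the study.

```python
import math

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(train, query, k=3):
    """Label a query vector by majority vote of its k nearest demonstrations."""
    neighbours = sorted(train, key=lambda ex: euclidean(ex[0], query))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

# Demonstration examples: (hypothetical 8-channel feature vector, gesture label).
train = [
    ([0.9, 0.8, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1], "wrist_flex"),
    ([0.8, 0.9, 0.2, 0.1, 0.0, 0.1, 0.2, 0.1], "wrist_flex"),
    ([0.1, 0.1, 0.9, 0.8, 0.1, 0.1, 0.1, 0.1], "finger_spread"),
    ([0.2, 0.1, 0.8, 0.9, 0.2, 0.1, 0.0, 0.1], "finger_spread"),
    ([0.1, 0.1, 0.1, 0.1, 0.9, 0.8, 0.1, 0.2], "fist"),
    ([0.1, 0.2, 0.1, 0.1, 0.8, 0.9, 0.2, 0.1], "fist"),
]

# A new sensor reading close to the wrist-flex demonstrations.
query = [0.85, 0.8, 0.15, 0.1, 0.1, 0.1, 0.1, 0.1]
print(knn_classify(train, query))  # → wrist_flex
```

Wekinator's actual model set (e.g. its neural-network continuous mappers, as compared in the paper) is richer than this nearest-neighbour sketch, but the pattern is the same: paired (input features, desired output) demonstrations train a supervised mapping that is then applied to live sensor input.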
Original language: English
Article number: 1384
Pages (from-to): 1-42
Number of pages: 42
Issue number: 12
Early online date: 7 Dec 2020
Publication status: Published - 7 Dec 2020


  • Gestural interfaces
  • Gesture representation
  • HCI
  • Interactive machine learning
  • Interactive music
  • Music composition
  • Myo
  • Optimisation
  • Performance gestures
  • Wekinator


