A metabolome pipeline: from concept to data to knowledge

Marie Brown, Warwick B. Dunn, David I. Ellis, Royston Goodacre, Julia Handl, Joshua D. Knowles, Steve O'Hagan, Irena Spasic, Douglas B. Kell

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Metabolomics, like other omics methods, produces huge datasets of biological variables, often accompanied by the necessary metadata. However, regardless of the form in which these are produced, they are merely the ground substance for assisting us in answering biological questions. In this short tutorial review and position paper we seek to set out some of the elements of “best practice” in the optimal acquisition of such data, and in the means by which they may be turned into reliable knowledge. Many of these steps involve the solution of what amount to combinatorial optimization problems, and methods developed for these, especially those based on evolutionary computing, are proving valuable. This is done in terms of a “pipeline” that goes from the design of good experiments, through instrumental optimization, data storage and manipulation, and the chemometric data processing methods in common use, to the necessary means of validation and cross-validation for giving conclusions that are credible and likely to be robust when applied in comparable circumstances to samples not used in their generation.
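    As a minimal illustration of the cross-validation step the abstract refers to — scoring a model only on samples not used in its generation — the sketch below partitions sample indices into k folds and averages a held-out score. The function names and the pluggable `train_and_score` callback are hypothetical conveniences for this sketch, not part of the pipeline described in the paper.

    ```python
    import random

    def k_fold_indices(n_samples, k, seed=0):
        """Shuffle sample indices and partition them into k disjoint folds."""
        idx = list(range(n_samples))
        random.Random(seed).shuffle(idx)
        return [idx[i::k] for i in range(k)]

    def cross_validate(n_samples, train_and_score, k=5):
        """Average a held-out score over k train/test splits.

        train_and_score(train_idx, test_idx) is expected to fit a model on
        the training indices and return its score on the held-out indices.
        """
        folds = k_fold_indices(n_samples, k)
        scores = []
        for i, test_idx in enumerate(folds):
            # Train on every fold except the i-th, which is held out for testing.
            train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
            scores.append(train_and_score(train_idx, test_idx))
        return sum(scores) / k
    ```

    The key property, emphasised in the abstract, is that each sample contributes to the score exactly once, and only when it was excluded from training.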
    Original language: English
    Pages (from-to): 39-51
    Number of pages: 13
    Journal: Metabolomics
    Volume: 1
    Issue number: 1
    Publication status: Published - Mar 2005

    Keywords

    • metabolomics
    • chemometrics
    • data processing
    • databases
    • machine learning
    • genetic algorithms
    • genetic programming
    • evolutionary computing

    Research Beacons, Institutes and Platforms

    • Manchester Institute of Biotechnology
