U-Compare: A modular NLP workflow construction and evaluation system

Y. Kano, M. Miwa, K. B. Cohen, L. E. Hunter, S. Ananiadou, J. Tsujii

    Research output: Contribution to journal › Article › peer-review


    During the development of natural language processing (NLP) applications, developers are often required to repeatedly perform certain tasks. Among these tasks, workflow comparison and evaluation are two of the most crucial because they help to discover the nature of NLP problems, which is important from both scientific and engineering perspectives. Although these tasks can potentially be automated, developers tend to perform them manually, repeatedly writing similar pieces of code. We developed tools to largely automate these tasks. Promoting component reuse is another way to further increase NLP development efficiency. Building on the interoperability-enhancing Unstructured Information Management Architecture (UIMA) framework, we have collected a large library of interoperable resources, developed several workflow creation utilities, added a customizable comparison and evaluation system, and built visualization utilities. These tools are modularly designed to accommodate various use cases and potential reuse scenarios. By integrating all these features into our U-Compare system, we hope to increase NLP developer efficiency. Simple to use and directly runnable from a web browser, U-Compare has already found uses in a range of applications. © 2011 by International Business Machines Corporation.
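    The kind of workflow comparison and evaluation the abstract describes typically means scoring the annotation spans produced by competing NLP pipelines against a gold standard. The following sketch is purely illustrative (it does not use U-Compare's or UIMA's actual API): it assumes annotations are represented as (start, end, label) tuples and compares two hypothetical workflow outputs under an exact-match criterion.

    ```python
    # Illustrative sketch only, not U-Compare's actual API: scoring two
    # NLP workflow outputs against a gold standard, the kind of repeated
    # evaluation task the system automates.

    def evaluate(predicted, gold):
        """Compare sets of (start, end, label) annotation spans and
        return (precision, recall, F1) under exact-match criteria."""
        predicted, gold = set(predicted), set(gold)
        tp = len(predicted & gold)  # spans agreeing in offsets and label
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(gold) if gold else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    # Hypothetical gold standard and outputs from two workflows.
    gold = {(0, 4, "Protein"), (10, 18, "Gene"), (25, 30, "Protein")}
    workflow_a = {(0, 4, "Protein"), (10, 18, "Gene")}
    workflow_b = {(0, 4, "Protein"), (10, 18, "Gene"), (40, 45, "Gene")}

    for name, output in [("A", workflow_a), ("B", workflow_b)]:
        p, r, f = evaluate(output, gold)
        print(f"workflow {name}: P={p:.2f} R={r:.2f} F1={f:.2f}")
    ```

    In practice U-Compare performs this comparison over UIMA's common type system, so heterogeneous components can be scored without the per-project glue code sketched here.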
    Original language: English
    Journal: IBM Journal of Research and Development
    Issue number: 3
    Publication status: Published - May 2011


