Abstract
The process of preparing potentially large and complex data sets for further analysis or manual examination is often called data wrangling. In classical warehousing environments, the steps in such a process are carried out using Extract-Transform-Load platforms, with significant manual involvement in specifying, configuring or tuning many of them. In typical big data applications, we need to ensure that all wrangling steps, including web extraction, selection, integration and cleaning, benefit from automation wherever possible. Towards this goal, in this paper we: (i) introduce a notion of data context, which associates portions of a target schema with extensional data of types that are commonly available; (ii) define a scalable methodology to bootstrap an end-to-end data wrangling process based on data profiling; (iii) describe how data context is used to inform automation in several wrangling steps, specifically matching, value format transformation, data repair, and mapping generation and selection, so as to optimise the accuracy, consistency and relevance of the result; and (iv) evaluate the approach with real estate and financial data, showing substantial improvements in the results of automated wrangling.
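As a rough illustration of how data context can inform one of the wrangling steps named above, the sketch below performs instance-based matching: each source column is aligned with the target attribute whose associated reference values it overlaps most. This is a minimal sketch under assumed inputs, not the paper's implementation; the function names, the Jaccard scoring choice, and the real-estate-style sample values are all hypothetical.

```python
# Illustrative sketch: instance-based matching informed by "data context",
# i.e. reference values associated with target schema attributes.

def jaccard(a, b):
    """Jaccard similarity between two collections of values."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_columns(source_columns, data_context, threshold=0.2):
    """Propose (source column -> target attribute) matches.

    source_columns: dict mapping column name -> list of observed values
    data_context:   dict mapping target attribute -> reference values
    Returns a dict of matches whose score meets the threshold.
    """
    matches = {}
    for col, values in source_columns.items():
        # Pick the target attribute whose reference data overlaps most.
        attr, refs = max(data_context.items(),
                         key=lambda kv: jaccard(values, kv[1]))
        score = jaccard(values, refs)
        if score >= threshold:
            matches[col] = (attr, round(score, 2))
    return matches

# Hypothetical example: postcodes and street names are the kinds of
# commonly available reference data the abstract alludes to.
context = {
    "postcode": ["M13 9PL", "M1 1AA", "SW1A 1AA"],
    "street":   ["Oxford Road", "Market Street", "Downing Street"],
}
source = {
    "col_a": ["M13 9PL", "SW1A 1AA", "XX1 2YY"],
    "col_b": ["Oxford Road", "Market Street", "High Street"],
}
print(match_columns(source, context))
# -> {'col_a': ('postcode', 0.5), 'col_b': ('street', 0.5)}
```

Value overlap is only one signal a real matcher would combine with schema-level evidence, but it shows how extensional data context lets matching proceed without manual configuration.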
| Original language | English |
|---|---|
| Pages (from-to) | 1-1 |
| Number of pages | 1 |
| Journal | IEEE Transactions on Big Data |
| Volume | 0 |
| Issue number | 0 |
| Early online date | 15 Apr 2019 |
| DOIs | |
| Publication status | E-pub ahead of print - 15 Apr 2019 |
Keywords
- Data Wrangling
- Data Matching
- Mapping Generation
- Data Transformation
- Data Cleaning
- Source Selection
Fingerprint
Dive into the research topics of 'Incorporating Data Context to Cost-Effectively Automate End-to-End Data Wrangling'. Together they form a unique fingerprint.
Projects
- 1 Finished
Value Added Data Systems: Principles and Architecture.
Paton, N. (PI), Fernandes, A. (CoI) & Keane, J. (CoI)
1/04/15 → 30/09/20
Project: Research