
Title page for ETD etd-11222018-050053


Type of Document Dissertation
Author Bao, Shunxing
Author's Email Address shunxing.bao@gmail.com
URN etd-11222018-050053
Title Algorithmic Enhancements to Data Colocation Grid Frameworks for Big Data Medical Image Processing
Degree PhD
Department Computer Science
Advisory Committee
  Advisor Name          Title
  Bennett A. Landman    Committee Chair
  Aniruddha Gokhale     Committee Co-Chair
  Alan Tackett          Committee Member
  Douglas C. Schmidt    Committee Member
  Hongyang Sun          Committee Member
Keywords
  • cloud computing
  • grid computing
  • medical image processing
  • Apache Hadoop ecosystem
  • Big data infrastructure
Date of Defense 2018-09-12
Availability unrestricted
Abstract
Large-scale medical imaging studies to date have predominantly leveraged in-house, laboratory-based, or traditional grid computing resources for their computing needs, where the applications often use hierarchical data structures (e.g., network file system stores) or databases (e.g., COINS, XNAT) for storage and retrieval. For laboratory-based approaches, performance is impeded by standard network switches, since typical processing can saturate network bandwidth during the transfer from storage to processing nodes, even for moderate-sized studies. Traditional grids, on the other hand, can be costly to use because tasks run on dedicated resources that lack elasticity. With the increasing availability of cloud-based big data frameworks such as Apache Hadoop, cloud-based services for executing medical imaging studies have shown promise.

Despite this promise, our studies have revealed that existing big data frameworks exhibit distinct performance limitations for medical imaging applications, which calls for new algorithms that optimize their performance and suitability for medical imaging. For instance, Apache HBase's data distribution strategy of region splits and merges is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). Big data medical image processing applications involving multi-stage analysis often exhibit significant variability in processing times, ranging from a few seconds to several days. Because traditional software technologies and platforms execute the analysis stages sequentially, errors in the pipeline are detected only at the later stages, even though the sources of errors lie predominantly in the highly compute-intensive first stage. This wastes precious computing resources and incurs prohibitively high costs for re-executing the application. To address these challenges, this research proposes a framework, Hadoop & HBase for Medical Image Processing (HadoopBase-MIP), which develops a range of performance optimization algorithms and employs system behavior modeling for data storage, data access, and data processing. We also describe how to build prototypes that support empirical verification of system behavior. Furthermore, we present a discovery made during the development of HadoopBase-MIP: a new type of contrast for enhancing deep brain structures in medical imaging. Finally, we show how to carry the Hadoop-based framework design forward into a commercial big data / high-performance computing cluster with an inexpensive, scalable, and geographically distributed file system.
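To illustrate the hierarchical organization the abstract refers to, the sketch below shows one way imaging metadata could be flattened into an HBase row key so that rows from the same project, subject, and session sort adjacently and tend to be colocated in the same region. This is a minimal, hypothetical illustration, not the dissertation's actual HadoopBase-MIP implementation; the table name "images", the column family "d", and the key layout are assumptions.

    // Minimal, hypothetical sketch: encode the project/subject/session/scan/slice
    // hierarchy into a single lexicographically ordered HBase row key so that
    // related slices sort together and tend to land in the same region.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ImageRowKeySketch {
        // Hypothetical key layout: project/subject/session/scan/slice (zero-padded)
        static byte[] rowKey(String project, String subject, String session,
                             String scan, int slice) {
            return Bytes.toBytes(String.format("%s/%s/%s/%s/%05d",
                    project, subject, session, scan, slice));
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("images"))) {
                // Store one slice; real image bytes would replace the empty placeholder.
                Put put = new Put(rowKey("proj01", "subj001", "sess01", "T1w", 42));
                put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("nifti"), new byte[0]);
                table.put(put);
            }
        }
    }

With such a key, a prefix scan (e.g., over "proj01/subj001/") retrieves an entire subject without touching other regions, which is one way hierarchical locality can be preserved despite HBase's automatic region splitting.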

Files
  Filename   Size      Approximate Download Time (Hours:Minutes:Seconds)
                       28.8 Modem   56K Modem   ISDN (64 Kb)   ISDN (128 Kb)   Higher-speed Access
  bao.pdf    9.24 MB   00:42:46     00:21:59    00:19:14       00:09:37        00:00:49
