High Performance Photogrammetry for Academic Research


Citation

Guangchen Ruan, Eric Wernert, Tassie Gniady, Esen Tuna and William Sherman, “High Performance Photogrammetry for Academic Research”, to appear in Proceedings of PEARC’18: Practice and Experience in Advanced Research Computing. Pittsburgh, PA, July 22-26, 2018.

Description

Photogrammetry is the process of computationally extracting a three-dimensional surface model from a set of two-dimensional photographs of an object or environment. It is used to build models of everything from terrains to statues to ancient artifacts. In the past, the computational process was done on powerful PCs and could take weeks for large datasets. Even relatively small objects often required many hours of compute time to stitch together. With the availability of parallel processing options in the latest release of state-of-the-art photogrammetry software, it is possible to leverage the power of high performance computing systems on large datasets. In this paper we present a particular implementation of a high performance photogrammetry service. Though the service is currently based on a specific software package (Agisoft's PhotoScan), our system architecture is designed around a general photogrammetry process that can be easily adapted to leverage other photogrammetry tools. In addition, we report on an extensive performance study that measured the relative impacts of dataset size, software quality settings, and processing cluster size. Furthermore, we share lessons learned that are useful to system administrators looking to establish a similar service, and we describe the user-facing support components that are crucial for the success of the service.
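For readers unfamiliar with the processing stages such a service orchestrates, the sketch below walks through a single serial run of the photogrammetry pipeline scripted against the Agisoft PhotoScan Python API. The call signatures shown are assumptions based on the PhotoScan 1.x scripting interface (they differ in later Metashape releases), and all paths and quality settings are illustrative placeholders rather than values taken from the paper.

```python
# Minimal sketch of a serial photogrammetry run with the PhotoScan 1.x Python API.
# Assumptions: enum and parameter names follow PhotoScan 1.x and may vary by version;
# photo paths, quality settings, and output location are placeholders.
import glob
import PhotoScan

doc = PhotoScan.Document()
chunk = doc.addChunk()

# 1. Load the two-dimensional photographs of the object or environment.
chunk.addPhotos(sorted(glob.glob("/path/to/photos/*.JPG")))

# 2. Detect and match features across photos, then estimate camera poses.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
chunk.alignCameras()

# 3. Build depth maps and the dense point cloud (typically the most compute-intensive stage).
chunk.buildDepthMaps(quality=PhotoScan.HighQuality, filter=PhotoScan.AggressiveFiltering)
chunk.buildDenseCloud()

# 4. Reconstruct the triangulated surface model and texture it.
chunk.buildModel(surface=PhotoScan.Arbitrary, source=PhotoScan.DenseCloudData)
chunk.buildUV()
chunk.buildTexture()

doc.save("/path/to/output/project.psx")
```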

Date

July 2018

Type

Conference Paper