Rendering Massive Models

Date

2017-01-01

Department

Computer Science and Electrical Engineering

Program

Computer Science

Rights

This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu
Distribution Rights granted to UMBC by the author.

Abstract

Whether in the quest for increased visual realism in cinema or in processing the latest scientific data sets, the sizes of the models being rendered are ever increasing. Recent films feature models that exceed hundreds of millions of elements, and datasets in scientific visualization are approaching an exabyte in size. As these trends continue, new methodologies will be required to render datasets of this size. To reduce the geometric complexity that must be processed, many renderers employ techniques that keep the number of individual objects within the range of a 32-bit integer. The total number of elements generated for a typical complex scene can easily exceed the capacity of a 32-bit integer, but approaches that allow for memory reuse keep an upper bound on the number of objects the renderer processes at once. This dissertation presents methods for rendering models and scenes that exceed both the number of elements representable in a 32-bit integer and the memory capacity of any single processor. These contributions include new parallel algorithms for rendering large models and a new stochastic data structure that reduces node-to-node communication during rendering.
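
As a concrete illustration of the element-count limit described above, the short C++ sketch below multiplies a hypothetical instance count by a hypothetical per-instance triangle count (both figures are assumptions chosen for illustration, not numbers from the dissertation) and shows how the total silently wraps when stored in a 32-bit integer while a 64-bit integer represents it correctly.

    #include <cstdint>
    #include <iostream>

    int main() {
        // Hypothetical scene: instanced geometry whose fully expanded element
        // count is far beyond what a 32-bit index can address. Both figures
        // below are illustrative assumptions, not numbers from the dissertation.
        const std::uint64_t instances     = 50000;    // assumed instance count
        const std::uint64_t triangles_per = 200000;   // assumed triangles per instance

        const std::uint64_t total64 = instances * triangles_per;            // 10,000,000,000 elements
        const std::uint32_t total32 = static_cast<std::uint32_t>(total64);  // silently wraps modulo 2^32

        std::cout << "64-bit element count: " << total64 << '\n'
                  << "32-bit element count: " << total32 << " (wrapped)\n"
                  << "UINT32_MAX:           " << UINT32_MAX << '\n';
        return 0;
    }

The same wrap-around affects primitive indices inside acceleration structures, which is why renderers either cap the number of objects in flight or move to wider identifiers.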
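
The abstract does not describe the stochastic data structure itself, so the sketch below should be read only as a generic illustration of the idea: a standard Bloom filter (an assumed stand-in, not necessarily the structure the dissertation introduces) lets a node answer "has this element ID probably been seen before?" in constant space, so that redundant node-to-node queries can often be skipped at the cost of a small false-positive rate.

    #include <bitset>
    #include <cstdint>
    #include <functional>

    // Illustrative Bloom filter over 64-bit element IDs. It stands in
    // generically for "a stochastic data structure"; the dissertation's
    // actual structure and its protocol are not specified in the abstract.
    class ElementBloomFilter {
        static constexpr std::size_t   kBits   = 1u << 20;  // assumed filter size (1 Mi bits)
        static constexpr std::uint64_t kHashes = 4;         // assumed number of hash functions
        std::bitset<kBits> bits_;

        static std::size_t slot(std::uint64_t id, std::uint64_t seed) {
            // Reuse std::hash with a mixed-in seed as a cheap hash family.
            return std::hash<std::uint64_t>{}(id ^ (seed * 0x9E3779B97F4A7C15ULL)) % kBits;
        }

    public:
        void insert(std::uint64_t id) {
            for (std::uint64_t s = 1; s <= kHashes; ++s) bits_.set(slot(id, s));
        }

        // false -> the ID was definitely never inserted
        // true  -> the ID was probably inserted (small false-positive rate)
        bool maybe_contains(std::uint64_t id) const {
            for (std::uint64_t s = 1; s <= kHashes; ++s)
                if (!bits_.test(slot(id, s))) return false;
            return true;
        }
    };

Any communication scheme built on such a filter has to tolerate the occasional false positive, for example by treating a "probably yes" answer as a hint rather than a guarantee.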