This is an old revision of this page, as edited by Flamurai (talk | contribs) at 08:00, 29 January 2005 (why it was developed; info about bucketing).
Reyes rendering is a method used in 3D computer graphics to render an image. It was developed in 1987 at Pixar for PhotoRealistic RenderMan.
The algorithm was designed to overcome the speed and memory limitations of photorealistic algorithms, such as ray tracing, in use at the time. In fact, Reyes stands for Renders Everything You Ever Saw.
The Reyes algorithm introduced the concept of a micropolygon, which is a polygon that is at least as small as a pixel in the output image. These micropolygons are directly scan converted to produce the output image.
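As an illustration, the grid dimensions needed to keep micropolygons at or below a target screen size can be estimated from a primitive's screen-space bounding box. The function name and the half-pixel target below are illustrative assumptions, not part of any particular renderer:

```python
import math

def dice_rate(screen_w_px, screen_h_px, target_size=0.5):
    """Return (nu, nv): grid dimensions for dicing a primitive whose
    screen-space bounding box is screen_w_px by screen_h_px pixels,
    so each micropolygon covers roughly target_size pixels."""
    nu = max(1, math.ceil(screen_w_px / target_size))
    nv = max(1, math.ceil(screen_h_px / target_size))
    return nu, nv

# A patch covering 8 x 4 pixels dices into a 16 x 8 micropolygon grid.
print(dice_rate(8, 4))  # (16, 8)
```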
This method provides a fast, natural way of rendering curved surfaces, such as those represented by parametric patches. With scanline methods, the patches would have to be tessellated into polygons, which makes the surface appear faceted. The only way to counter this effect is to introduce more polygons, but by the time the surface appeared completely smooth, the polygon count would drastically slow down the renderer and require a great deal of memory. Ray tracing parametric surfaces, on the other hand, is slow because calculating the intersection of a ray with a patch is difficult.
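The polygon-count problem can be made concrete: approximating a circular silhouette of radius r pixels with a regular n-gon leaves a maximum deviation (sagitta) of r(1 − cos(π/n)) from the true curve, so the segment count needed for sub-half-pixel error grows with the on-screen size of the object. A small sketch, with an assumed half-pixel tolerance and illustrative function name:

```python
import math

def segments_for_tolerance(radius_px, tol_px=0.5):
    # Maximum deviation of an n-gon edge from a circle of radius r:
    #   e = r * (1 - cos(pi / n))
    # Find the smallest n that keeps e at or below tol_px.
    n = 3
    while radius_px * (1 - math.cos(math.pi / n)) > tol_px:
        n += 1
    return n

for r in (10, 100, 1000):
    print(r, segments_for_tolerance(r))  # 10 -> 10, 100 -> 32, 1000 -> 100
```

The count grows roughly as the square root of the radius, so large close-up surfaces are exactly where fixed tessellation becomes expensive.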
The basic Reyes pipeline has the following steps:
- Transform. Transform objects from their own coordinate systems to the coordinate system of the virtual camera.
- Bound. Calculate the bounding volume of each object.
- Split. Split large objects into smaller objects.
- Dice. Convert the object into a grid of micropolygons, each approximately half the size of a pixel.
- Shade. Calculate lighting and shading at each vertex of the micropolygon grid.
- Draw. Scan convert the micropolygons, producing the final 2D image.
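The stages above can be sketched in miniature. The toy renderer below collapses the pipeline onto screen-space rectangles with constant shading, so dicing, shading, and drawing reduce to sampling pixel centers; every name and constant is an illustrative assumption, not Pixar's implementation:

```python
import math

TARGET = 0.5   # target micropolygon size in pixels
MAX_GRID = 16  # split any primitive that would dice finer than this

def render(prims, width, height):
    """Reyes-style loop over screen-space rects (x0, y0, x1, y1, shade)."""
    fb = [[0.0] * width for _ in range(height)]
    work = list(prims)
    while work:
        x0, y0, x1, y1, shade = work.pop()
        # Bound: cull primitives wholly off screen.
        if x1 <= 0 or y1 <= 0 or x0 >= width or y0 >= height:
            continue
        # Split: quarter primitives that would dice too finely.
        if (x1 - x0) / TARGET > MAX_GRID or (y1 - y0) / TARGET > MAX_GRID:
            xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
            work += [(x0, y0, xm, ym, shade), (xm, y0, x1, ym, shade),
                     (x0, ym, xm, y1, shade), (xm, ym, x1, y1, shade)]
            continue
        # Dice + shade + draw: with constant shading this reduces to
        # sampling the pixel centers covered by the primitive.
        for py in range(max(0, math.floor(y0)), min(height, math.ceil(y1))):
            for px in range(max(0, math.floor(x0)), min(width, math.ceil(x1))):
                if x0 <= px + 0.5 < x1 and y0 <= py + 0.5 < y1:
                    fb[py][px] = shade
    return fb

img = render([(2, 2, 6, 5, 1.0)], 8, 8)
print(sum(v for row in img for v in row))  # 4 x 3 pixel centers -> 12.0
```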
A common memory optimization introduces a step called bucketing prior to the dicing step. The output image is divided into a coarse grid. Each grid square is a bucket. The objects are then split roughly along the bucket boundaries and placed into buckets based on their location. Each bucket is diced and drawn individually, and the data from the previous bucket is discarded before the next bucket is processed.
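A minimal sketch of the bucket assignment, assuming square buckets and screen-space bounding boxes (the bucket size and function name are illustrative):

```python
import math

def overlapped_buckets(bbox, width, height, size=16):
    """Return the (col, row) index of every bucket that the
    screen-space bounding box (x0, y0, x1, y1) touches."""
    x0, y0, x1, y1 = bbox
    cols = math.ceil(width / size)
    rows = math.ceil(height / size)
    bx0 = max(0, int(x0 // size))
    by0 = max(0, int(y0 // size))
    bx1 = min(cols - 1, math.ceil(x1 / size) - 1)
    by1 = min(rows - 1, math.ceil(y1 / size) - 1)
    return [(bx, by) for by in range(by0, by1 + 1)
                     for bx in range(bx0, bx1 + 1)]

# A box spanning x 10..40, y 10..20 on a 64 x 64 image touches
# six of the sixteen 16-pixel buckets.
print(overlapped_buckets((10, 10, 40, 20), 64, 64))
# [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
```

Only the primitives assigned to the current bucket need micropolygon grids in memory, which is the source of the memory saving described above.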