Reyes rendering

Reyes rendering is a method used in 3D computer graphics to render an image. It was developed in the mid-1980s by Lucasfilm's Computer Graphics Research Group, which is now Pixar. It was first used to render images for a film in 1985. Pixar's PhotoRealistic RenderMan is an implementation of the Reyes algorithm.

The algorithm was designed to overcome the speed and memory limitations of photorealistic algorithms, such as ray tracing, in use at the time. In fact, Reyes stands for Renders Everything You Ever Saw.

The Reyes algorithm introduced the concept of a micropolygon, which is a polygon that is at least as small as a pixel in the output image. These micropolygons are directly scan converted to produce the output image.
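
A minimal sketch of dicing in Python follows; the parametric patch, the grid resolution, and the function names are illustrative assumptions rather than any particular renderer's API. Each quad of four adjacent grid vertices plays the role of one micropolygon.

    # Minimal sketch: dice a parametric surface into a grid of micropolygons.
    # surface_point() is a stand-in for any parametric evaluator P(u, v).
    import numpy as np

    def surface_point(u, v):
        # Illustrative patch: part of a unit sphere parameterized by (u, v).
        theta, phi = u * np.pi, v * np.pi
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])

    def dice(nu, nv):
        # Evaluate the surface on an (nu + 1) x (nv + 1) grid of vertices.
        grid = np.array([[surface_point(i / nu, j / nv)
                          for j in range(nv + 1)] for i in range(nu + 1)])
        # Each quad of four adjacent vertices is one micropolygon.
        micropolygons = [(grid[i, j], grid[i + 1, j],
                          grid[i + 1, j + 1], grid[i, j + 1])
                         for i in range(nu) for j in range(nv)]
        return grid, micropolygons

    grid, mps = dice(32, 32)   # a 32 x 32 grid yields 1024 micropolygons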

This method provides a fast, natural way of rendering curved surfaces, such as those represented by parametric patches. With methods like z-buffering, the patches would have to be tessellated into polygons. Tessellation causes the surface to appear faceted. The only way to counter this effect is to introduce more polygons. By the time the surface appeared completely smooth, the number of polygons would drastically slow down the renderer and require a great deal of memory. On the other hand, ray tracing parametric surfaces is slow because calculating the intersection of a ray with a patch is difficult.
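
The sketch below shows one way a renderer might pick a dice rate from an object's projected size on screen, so that micropolygons come out roughly half a pixel across. The pinhole projection, the 0.5-pixel shading rate, and the function names are assumptions made for illustration, not PhotoRealistic RenderMan's actual method.

    # Sketch: choose a dice rate so each micropolygon spans about half a pixel.
    import numpy as np

    def projected_size(bound_min, bound_max, focal_length, image_width_px, film_width):
        # Crude pinhole projection of the bounding box onto the image plane,
        # returning its approximate extent in pixels.
        depth = max(bound_min[2], 1e-6)
        world_extent = np.linalg.norm(np.array(bound_max[:2]) - np.array(bound_min[:2]))
        film_extent = world_extent * focal_length / depth
        return film_extent * image_width_px / film_width

    def dice_rate(pixel_size, shading_rate=0.5):
        # Micropolygons per edge so each one is ~shading_rate pixels wide.
        return max(1, int(np.ceil(pixel_size / shading_rate)))

    px = projected_size((-1.0, -1.0, 10.0), (1.0, 1.0, 10.0),
                        focal_length=1.0, image_width_px=640, film_width=1.0)
    print(dice_rate(px))   # grid resolution needed for half-pixel micropolygons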

Motion blur and depth of field are effects that increase the visual realism of an image. The Reyes renderer was designed to make these effects easy to achieve. For motion blur, each relevant micropolygon simply has a start and end position during a single animation frame, and the micropolygon is rendered using a Monte Carlo method called stochastic sampling. Depth of field is also handled using stochastic sampling.
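
The sketch below illustrates the stochastic-sampling idea for motion blur: each sample inside a pixel is given a random shutter time, the micropolygon is tested at its position interpolated for that time, and averaging the samples gives a blurred coverage value. The bounding-box coverage test and the single moving quad are simplified stand-ins chosen for brevity.

    # Sketch: motion blur via stochastic (Monte Carlo) sampling of shutter time.
    import random

    def lerp(a, b, t):
        return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

    def covers(quad, x, y):
        # Crude coverage test: is the point inside the quad's 2D bounding box?
        xs, ys = [p[0] for p in quad], [p[1] for p in quad]
        return min(xs) <= x <= max(xs) and min(ys) <= y <= max(ys)

    def pixel_coverage(quad_start, quad_end, px, py, samples=16):
        hits = 0
        for _ in range(samples):
            t = random.random()              # random shutter time in [0, 1)
            x = px + random.random()         # jittered sample position in the pixel
            y = py + random.random()
            quad_t = [lerp(a, b, t) for a, b in zip(quad_start, quad_end)]
            hits += covers(quad_t, x, y)
        return hits / samples                # fractional, blurred coverage

    start = [(0.0, 0.0), (0.6, 0.0), (0.6, 0.6), (0.0, 0.6)]
    end = [(3.0, 0.0), (3.6, 0.0), (3.6, 0.6), (3.0, 0.6)]
    print(pixel_coverage(start, end, px=1, py=0))   # quad sweeps across this pixel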

The basic Reyes pipeline has the following steps (a rough sketch of how they fit together follows the list):

  1. Bound. Calculate the bounding volume of each object.
  2. Split. Split large objects into smaller objects.
  3. Dice. Convert the object into a grid of micropolygons, each approximately half the size of a pixel.
  4. Shade. Calculate lighting and shading at each vertex of the micropolygon grid.
  5. Hide. Bust the grid into individual micropolygons, each of which is bounded and checked for visibility.
  6. Draw. Scan convert the micropolygons, producing the final 2D image.
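
A rough, runnable sketch of how these steps fit together is given below. It uses flat screen-space quads as stand-in primitives, a constant colour as the shading result, and a plain z-buffer for hiding; none of this reflects PhotoRealistic RenderMan's internals, only the shape of the loop.

    # Rough, runnable sketch of the Reyes loop using flat screen-space quads
    # ("patches") as stand-in primitives; only the structure of the six steps
    # above is being illustrated, not a real renderer's data model.
    import numpy as np

    WIDTH, HEIGHT = 64, 64
    MAX_SIDE = 16                                    # split patches wider than this
    framebuffer = np.zeros((HEIGHT, WIDTH, 3))
    zbuffer = np.full((HEIGHT, WIDTH), np.inf)

    def render(patches):
        work = list(patches)
        while work:
            x0, y0, x1, y1, z, color = work.pop()
            # 1. Bound: cull patches whose bound lies entirely off screen.
            if x1 < 0 or y1 < 0 or x0 >= WIDTH or y0 >= HEIGHT:
                continue
            # 2. Split: quarter patches that are still too large to dice at once.
            if x1 - x0 > MAX_SIDE or y1 - y0 > MAX_SIDE:
                xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
                work += [(x0, y0, xm, ym, z, color), (xm, y0, x1, ym, z, color),
                         (x0, ym, xm, y1, z, color), (xm, ym, x1, y1, z, color)]
                continue
            # 3. Dice: cover the patch with roughly half-pixel micropolygons.
            nu = max(1, int(np.ceil((x1 - x0) / 0.5)))
            nv = max(1, int(np.ceil((y1 - y0) / 0.5)))
            us, vs = np.linspace(x0, x1, nu + 1), np.linspace(y0, y1, nv + 1)
            # 4. Shade: here simply the patch colour at every grid vertex.
            shaded = color
            # 5. Hide / 6. Draw: bust the grid into micropolygons, z-test the
            # pixel under each micropolygon's centre, and write the colour.
            for i in range(nu):
                for j in range(nv):
                    cx, cy = (us[i] + us[i + 1]) / 2, (vs[j] + vs[j + 1]) / 2
                    px, py = int(cx), int(cy)
                    if 0 <= px < WIDTH and 0 <= py < HEIGHT and z < zbuffer[py, px]:
                        zbuffer[py, px] = z
                        framebuffer[py, px] = shaded

    render([(5, 5, 60, 40, 2.0, (1.0, 0.2, 0.2)),    # red patch in front
            (20, 10, 63, 63, 5.0, (0.2, 0.2, 1.0))])  # blue patch behind it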

A common memory optimization introduces a step called bucketing prior to the dicing step. The output image is divided into a coarse grid. Each grid square is a bucket. The objects are then split roughly along the bucket boundaries and placed into buckets based on their location. Each bucket is diced and drawn individually, and the data from the previous bucket is discarded before the next bucket is processed.
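
Continuing the flat-patch sketch above, the fragment below shows the bucketing idea: the screen is cut into a coarse grid, each patch is filed under every bucket its bound overlaps, and buckets are processed one at a time so diced grids never need to be kept for the whole image. The bucket size and the data layout are arbitrary choices for illustration.

    # Sketch: assign split patches to screen-space buckets, then process the
    # buckets one after another, discarding per-bucket data in between.
    WIDTH, HEIGHT, BUCKET = 64, 64, 16
    NBX, NBY = WIDTH // BUCKET, HEIGHT // BUCKET

    def assign_to_buckets(patches):
        buckets = {(bx, by): [] for by in range(NBY) for bx in range(NBX)}
        for patch in patches:
            x0, y0, x1, y1 = patch[:4]
            for by in range(max(0, int(y0) // BUCKET), min(NBY - 1, int(y1) // BUCKET) + 1):
                for bx in range(max(0, int(x0) // BUCKET), min(NBX - 1, int(x1) // BUCKET) + 1):
                    buckets[(bx, by)].append(patch)
        return buckets

    buckets = assign_to_buckets([(5, 5, 60, 40, 2.0, (1.0, 0.2, 0.2))])
    for (bx, by), contents in sorted(buckets.items()):
        # Dice, shade, hide, and draw only this bucket's patches here, then
        # discard the diced grids before moving on to the next bucket.
        print(f"bucket ({bx}, {by}): {len(contents)} patch(es)")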
