In ray tracing programs, or other software where the image is generated from input that allows arbitrary resolution, anti-aliasing is accomplished by supersampling: the colour of each rendered pixel is computed by evaluating colour values at a number of points inside the pixel and then averaging the results.
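
As a rough sketch of the idea (in Python, with a hypothetical trace_ray(x, y) function standing in for whatever the renderer actually does to compute a colour at a point on the image plane, and a 3x3 sample grid as one common choice):

    def render_pixel(px, py, trace_ray, grid=3):
        # Average a grid x grid block of samples taken inside pixel (px, py).
        # trace_ray(x, y) is assumed to return an (r, g, b) tuple for a
        # point given in continuous image-plane coordinates.
        total = [0.0, 0.0, 0.0]
        for i in range(grid):
            for j in range(grid):
                # Offset each sample so the grid sits centred within the pixel.
                x = px + (i + 0.5) / grid
                y = py + (j + 0.5) / grid
                r, g, b = trace_ray(x, y)
                total[0] += r
                total[1] += g
                total[2] += b
        n = grid * grid
        return (total[0] / n, total[1] / n, total[2] / n)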

Imagine our ray tracer is rendering a sloping dark object against a white background.

Using one ASCII character per pixel, with no anti-aliasing, we get something like:


             ####
           ######
          #######
        #########
       ##########
     ############

With anti-aliasing, we 'zoom in' on each pixel and calculate a colour value at several points inside it:
   
     ---------------------
     |    |    |    |####|
     |    |    |   #|####|
     |    |    | ###|####|
     ---------------------
     |    |    |####|####|
     |    |  ##|####|####|
     |    |####|####|####|
     ---------------------
     |   #|####|####|####|
     | ###|####|####|####|
     |####|####|####|####|
     ---------------------

(Each 'box' represents one pixel.)

Averaging the dark and light 'hits' within each edge pixel then produces a final 'rendering' more like:


            o####
          .######
         o#######
       .#########
      o##########
     ############

(Here 'o' and '.' represent intermediate gray shades between the light background and the dark object.)
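
For the ASCII 'rendering' above, the mapping from averaged coverage to a character might look something like this (a sketch only; the thresholds and characters are arbitrary choices for illustration):

    def shade_char(coverage):
        # coverage = fraction of sub-samples inside the pixel that hit
        # the dark object, between 0.0 and 1.0.
        if coverage > 0.75:
            return '#'   # mostly dark
        if coverage > 0.4:
            return 'o'   # mid gray
        if coverage > 0.1:
            return '.'   # light gray
        return ' '       # background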

Though the final image is still pixelated at the same resolution, the 'grayness' around the edges fools the eye of the viewer into thinking the diagonal line is smoother than in the non-anti-aliased version.

As usual in ray tracing, there is no gain in image quality without considerable pain: the extra calculations increase the rendering time by a factor roughly equal to the number of samples per pixel (a 3x3 grid of samples, for example, means roughly nine times the work).