I'm talking about programming here, for a game or some other kind of graphics program.

There are standard ways of setting up a bunch of inequalities for testing whether a given point is inside a polygon, provided you have the coordinates of the vertices. These thingies are a pain in the neck to program, so much busywork. I have a special case: my polygons are regular, i.e. all the sides are the same length and all the angles are equal:

Say the polygon is an n-gon, i.e. it has n sides, all the same length. Call the polygon's vertices P1 through Pn.

Find the center of the polygon by taking the average of all the x, y and z coordinates, respectively. Or just x and y if this is just in 2 dimensions, of course. For a regular polygon, this average of the vertices lands exactly on the center. Call this P0.
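As a sketch of that step (Python, 2D, with vertices as tuples; the function name is my own):

```python
def centroid(vertices):
    # Average the vertex coordinates; for a regular polygon
    # this lands exactly on the polygon's center.
    n = len(vertices)
    return (sum(x for x, _ in vertices) / n,
            sum(y for _, y in vertices) / n)

# e.g. the unit square
# centroid([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]) → (0.5, 0.5)
```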

Reflect P0 across each of the sides, resulting in n points Q1 through Qn which are exactly as far outside the polygon as P0 is inside it. A simple way to do this is to find the midpoint Mi of each side (again by averaging the coordinates of its endpoints), take the vector from P0 to Mi, and double it, i.e. Qi = 2Mi - P0. This works because in a regular polygon the segment from the center to a side's midpoint (the apothem) is perpendicular to that side, so doubling it really does mirror P0 across the side.
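The reflection step, continuing the same sketch (sides run from vertex i to vertex i+1, wrapping around):

```python
def reflected_points(vertices, p0):
    # Reflect p0 through the midpoint of each side: q_i = 2*m_i - p0.
    # For a regular polygon this equals reflecting p0 across the side
    # itself, since the apothem is perpendicular to the side.
    n = len(vertices)
    qs = []
    for i in range(n):
        ax, ay = vertices[i]
        bx, by = vertices[(i + 1) % n]         # side from vertex i to i+1
        mx, my = (ax + bx) / 2, (ay + by) / 2  # side midpoint m_i
        qs.append((2 * mx - p0[0], 2 * my - p0[1]))
    return qs
```

For the unit square with center (0.5, 0.5), this gives the four points (0.5, -0.5), (1.5, 0.5), (0.5, 1.5) and (-0.5, 0.5), each one side-distance outside the square.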

Now when it comes time to determine whether a point Z is inside the polygon, just determine its distance to P0 using plain old Pythagoras and then compare that distance with each of Z's distances to Q1 through Qn. This works because each side lies on the perpendicular bisector of the segment from P0 to its Qi, so being closer to P0 than to Qi means being on the inside of that side. If the first distance is shorter than all the rest, then Z is inside the polygon; if one of the other distances is strictly shorter, Z is outside; and if none is shorter but the first distance equals one of them, Z is on one of the sides.
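Putting the test together (same sketch; `math.dist` is Python 3.8+, and `math.isclose` stands in for exact equality since we're comparing floats):

```python
import math

def classify(z, p0, qs):
    # Each side lies on the perpendicular bisector of p0-q_i,
    # so "closer to p0 than to q_i" means "inside side i".
    d0 = math.dist(z, p0)
    ds = [math.dist(z, q) for q in qs]
    if any(d < d0 and not math.isclose(d0, d) for d in ds):
        return "outside"
    if any(math.isclose(d0, d) for d in ds):
        return "on a side"
    return "inside"
```

Note the order of the checks: the "outside" test has to come first, because a point on the extension of a side's line beyond the polygon is also equidistant from P0 and that side's Qi.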

This can be optimized a little by comparing the squares of distances instead of bothering to take all of their square roots. I suspect that the approach with the inequalities is more efficient even for this special case. But I think my approach is... spiffier.
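The squared-distance version of the inside test might look like this (valid since for non-negative numbers, comparing squares is the same as comparing the distances themselves):

```python
def inside_squared(z, p0, qs):
    # Compare squared distances; no square roots needed.
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    dz = d2(z, p0)
    return all(dz < d2(z, q) for q in qs)
```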
