The AI(s) were smart enough to analyze the way that the cones inside the human eye interpret color through the optic nerve, and thus knew that the human brain would take care of this by itself. Ironically enough, the color we perceive corresponds to the absence of certain wavelengths rather than their presence: a pigment's color is whatever it fails to absorb.
When light strikes a piece of green cloth, pigments in the cloth absorb some of the light. Any wavelengths that aren't absorbed are reflected back off the object, and when we see color, we are seeing those reflected wavelengths. Black means all wavelengths were absorbed; white means none were.
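The absorb-and-reflect idea above can be sketched as a toy model (my own illustration, not anything from the original discussion): treat a pigment as the set of wavelength bands it absorbs, and the perceived color as whatever the incident light contains that the pigment did not soak up.

```python
# Toy model of subtractive color: a pigment is the set of
# wavelength bands it absorbs; what we "see" is the leftover.

WHITE_LIGHT = {"red", "green", "blue"}  # crude stand-in for the full spectrum

def reflected(incident, absorbed):
    """Wavelength bands that bounce off a pigment under the given light."""
    return incident - absorbed

# Green cloth absorbs everything except the green band.
green_cloth_absorbs = {"red", "blue"}

print(reflected(WHITE_LIGHT, green_cloth_absorbs))  # only green remains
print(reflected(WHITE_LIGHT, WHITE_LIGHT))          # empty set: black
print(reflected(WHITE_LIGHT, set()))                # all reflected: white
```

It's a set-difference cartoon of a continuous spectrum, but it captures the point: the cloth's "greenness" lives in what it throws away.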
The human brain associates different colors with wavelengths of light, but there's no way to tell (yet) whether two people associate the same mental color with the same wavelength. If a child pointed at something that looked green in their mind and their parents called it red, the word "red" would become attached to that inner green. Therefore, two people looking at the cloth could both call it green and really be seeing different colors. Although the colors may differ in the mind's eye, the wavelengths associated with the words do not.
All the AI(s) had to worry about was knowing which pigments reflected which wavelengths of light. To them, it was simply a chemical formula of sorts. I imagine that would be fairly trivial for an artificial intelligence that has managed to spawn an entire race of machines.