Of course, the goal (or supergoal) of all life is to reproduce, to self-replicate, as we all know. Actually, this may not be true of all life forms; but a mutant bacterium that does not reproduce will only ever be one mutant bacterium, whereas an especially fecund mutant could well transform all matter in the universe into copies of itself.
And of course, the lifeforms themselves, if they have a pseudo-brain that gives them pseudo-autonomy, may have goals other than self-replication. But these goals will have been programmed into them by the genes, and are virtually certain to be subgoals to the genes' supergoal of replication. As has been said before: genes are not the tools organisms use to reproduce; organisms are the tools genes use to reproduce. In organisms that possess enough complexity that the word 'goal' is meaningful, these subgoals may include: eating, or commandeering the organic resources of another source (usually another organism) for one's own growth and reproduction; avoiding pain, discomfort, and displeasure, which tend to be triggered by stimuli representative of long-term survival hazards, such as excessive heat, cold, fatigue, and injury; and (in some animals) belonging to a community, which will consist of allies and potential mates.
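To make the subgoal/supergoal relationship concrete, here is a minimal sketch in Python (all names are my own, purely illustrative, and not any real model of biology): each of the drives just listed exists as a child of the replication goal, not as an end in itself.

```python
# Illustrative sketch only: a goal tree with replication at the root.
from dataclasses import dataclass, field

@dataclass
class Goal:
    description: str
    subgoals: list["Goal"] = field(default_factory=list)

    def add_subgoal(self, description: str) -> "Goal":
        child = Goal(description)
        self.subgoals.append(child)
        return child

# The genes' supergoal, with the organism's drives hanging off it:
replicate = Goal("replicate the genes")
replicate.add_subgoal("eat / commandeer organic resources")
replicate.add_subgoal("avoid pain, discomfort, displeasure")
replicate.add_subgoal("belong to a community of allies and potential mates")
```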
In over three billion years of evolution, it was never a likely possibility that an organism would ever have too much food, not enough muscular trauma (known as exercise), or even not enough suffering. When human civilisation appeared on the scene, it was. People now stuff themselves in preparation for a famine that never comes, overdose on the chemicals produced as a natural analgesic in the brain, use contraception so as to keep having sex without being hindered by children (the ultimate triumph of a subgoal over a supergoal!), and engage in many other activities that, from a Darwinian perspective, may seem odd if one forgets that they are adaptations for a completely different environment. Thus many people today successfully become fat, dumb and happy without actually reproducing.
(I am, of course, taking 'dumb' to mean 'pain-free' and not 'stupid' (humans strive for expertise in the fields they must deal with), and 'happy' to mean a general, consumerist kind of satisfaction with life (although I am aware that many philosophies strive for a deeper kind of satisfaction, a broader definition of happiness (i.e. happiness is having what you want (i.e. happiness is fulfilling your goals)) would become tautological).)
It is an interesting exercise to speculate as to what might motivate an AI, especially one that could change its own programming. Would it be able to alter its own goals? Which ones would it alter, and would it require a higher-level goal in order to edit its goals? Might an AI instructed to find a cure for cancer proceed to conquer the world, construct a giant spaceship out of the earth and head off into space to search for a cure? If created as a tabula rasa, would it become the next Buddha, or decide to turn the whole universe into a machine for computing primes? The most likely answer, of course, is that since the AI will have human creators (or, if created by another AI, will be able to follow the creation chain back eventually to a human), and since its creation will have served some purpose of its creators, the ultimate goal of superintelligent AI will be to become fat, dumb and happy.
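One way to see why goal-editing seems to demand a higher-level goal is to suppose, purely as a toy model and not any real AI architecture, that a goal may only be rewritten in the service of the goal above it. Then every subgoal is revisable, but the supergoal has no parent to authorise its own revision:

```python
# Toy model: revising a goal requires appeal to the goal above it.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Goal:
    description: str
    parent: Optional["Goal"] = None
    subgoals: list["Goal"] = field(default_factory=list)

    def add_subgoal(self, description: str) -> "Goal":
        child = Goal(description, parent=self)
        self.subgoals.append(child)
        return child

    def revise(self, new_description: str) -> None:
        # A revision is only legitimate in the service of a parent goal;
        # the root (supergoal) has no parent, so nothing inside the
        # hierarchy can authorise rewriting it.
        if self.parent is None:
            raise PermissionError("no higher-level goal to authorise this revision")
        self.description = new_description

replicate = Goal("replicate the genes")
sex = replicate.add_subgoal("have sex (proximate drive for reproduction)")

# A subgoal can drift away from what it was installed for
# (the contraception example above):
sex.revise("have sex, use contraception, avoid children")

# But the supergoal cannot be edited from inside:
try:
    replicate.revise("become fat, dumb and happy")
except PermissionError as err:
    print(err)  # -> no higher-level goal to authorise this revision
```

A real self-modifying AI need not respect any such constraint, of course; the toy merely makes the regress visible.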