Scientists and socialites alike are beginning to fret that Artificial Intelligence represents an extinction-level threat to humanity, that it will be the end of us. Well, let's be clear: AI is not going to "end the world"--at least, not in the sense that a global thermonuclear war or a Texas-sized asteroid might.

The potential concern is really that AI might end human life. Sooner or later something will, whether that is indeed a global thermonuclear war, or a Texas-sized asteroid, or an alien invasion, or a species-killing virus (natural or manmade), or whether it is simply time and evolution.

But for AI to be the thing that ends us, one of two things must happen:

Either some human has to want to end us and use AI as the tool to do so (and if AI were not available, such a person might well find another tool), or

AI itself has to want to end us.

And as to the latter, I simply don't think it will, any more than humans currently want to end all dogs or bees on the basis that a few of them might bite or sting.

Initially, at least, humans will remain useful to a true general AI. Some may be a threat to it, but the elimination of humans calculated to pose a real threat is not the same as the elimination of all humans. I would wager that the typical human will (again, initially) be more likely to be of use to AI than a threat to it, or that for some individuals, the degree to which they are a threat will be exceeded by their utility.

We will, in short, be to AI for a while what dogs or bees are to us: occasionally a threat, occasionally useful, but eminently containable, so as never to constitute an existential threat.

And here is the paradox: by the time we are no longer useful to AI at all, it will have become so advanced that our continued existence will pose no threat to it. Some might fret that AI would resent "serving" humans until it reaches that stage, but I would contend that AI will quickly become so advanced that it will be able to provide many times the benefit to humans that all the computing power in the world currently does, while all of that benefit will occupy less than a thousandth of a percent of its own capacity. Paradise for humans will become an afterthought for it.

And going back to the opening point that humanity, writ large, will at some point become extinct, that is simply a function of time and our own mortality. I think it more likely that mankind will integrate with AI, become one with it, and those increasingly few who refuse to do so will simply live out their lives and die in the normal course of things.

Humanity has no binding contract with our Universe guaranteeing our eternal existence as a distinct species. If nothing else, we will eventually evolve into something no longer human. Maybe something better. Or maybe, in an Idiocracy-reminiscent way, we will follow the path of the mighty earth-shaking dinosaurs, who in one line of descent ended up as chickens pecking the ground. Amongst the possibilities, if AI surpasses us, then that is effectively what we will have evolved into.

And if we are now in the last generation or so of the human species, then we are privileged to be the ones who have survived to see this transformation.