There is a tendency when people talk about AI safety to say something like "what if we give an AI the goal of reducing cancer, and the AI solves the problem by reducing the human population to zero? No humans, no cancer! Of course, that wouldn't really happen, but it's an illustration of the sort of thing we have to expect on a higher level." I think that last sentence is very wrong.
Humans are pretty smart, individually. In a large group, we are really really smart, and build rockets and MRIs and Tokyo. But also, groups of humans have come to the serious, actionable conclusion that the optimal number of, for example, Jews, is zero Jews. That example is chosen deliberately, because you can easily think of multiple times when a zero-Jew target was actively pursued in the real world by powerful agents, namely governments. Nor is it the only time humans have done silly toy-model-type things in earnest; Mutually Assured Destruction wasn't pretend. We have eliminated entire species from the face of the Earth. And to be clear, we have also made species extinct by accident, including good animals that we like to eat and/or collect pieces of, and we have been doing this for tens of thousands of years, and still are. By accident.
The idea that things won't happen because they are obviously stupid is itself obviously stupid.
The idea that bad things won't happen because AIs would be smarter than us is dangerous for many reasons: because the agencies that do bad things now (corporations, governments, religions) are more powerful than any individual human, have the capacity to be more intelligent than any individual human, and often are; because AI does not mean smart, and AI risk does not depend on superintelligence, but on the ability to do damage in the real world; and because being smart does not mean being good, in humans or in anything else.
Toy models and flippant examples are a starting point, and we do have to think past them, to be aware that an actor alien to us will often think of things we do not; but they are not off the table or even very silly; they are exactly the sort of thing that happens. The fact that we may, potentially, someday not be the only group of idiots on the planet does not guarantee a safer level of idiocy.