Instrumental convergence is the idea that agents of whatever type will tend to share certain predictable goals. It is mostly discussed in the context of AI safety, but it applies equally to aliens, your neighbor Shirley, corporations, and anything else that acts like an agent.

As illustrated by thought experiments like the paperclip maximizer and arguments such as the orthogonality thesis, being very smart, very powerful, or very capable does not imply that an agent has predictable terminal goals. A genius may be an evil genius, and a very powerful being might be mostly interested in stamp collecting, God, fast cars, or any other arbitrary inanity. But you can still predict certain things that any agent will work towards.

This is simply because, whether you are good or evil, a philatelist or a philanderer, you can do you better if you have money, power, energy, and other various and sundry resources; you will also highly value your continued existence, and will fight and finagle to prevent anyone from terminating you. These are instrumental goals: goals you want because they help you achieve more of your terminal goals. The upshot is that we can predict that one of the basic goals of almost any agent will be to snaffle up lots of resources. In the case of AIs, this may eventually mean turning the world into computronium, but in the short term perhaps simply maximizing monetary income.
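The argument above can be made concrete with a toy sketch (entirely hypothetical, not from any source): give three agents wildly different terminal goals, all expressed as utility functions of resources, and note that each one prefers the action that gathers more resources.

```python
# Toy illustration of instrumental convergence: three agents with
# unrelated terminal goals all choose to acquire resources, because
# resources increase achievement of *any* monotone goal.
# All names and numbers here are invented for the sketch.

def paperclips_made(resources: float) -> float:
    # Terminal goal: maximize paperclips; more resources -> more clips.
    return 10.0 * resources

def stamps_collected(resources: float) -> float:
    # Terminal goal: collect stamps; resources buy stamps.
    return 3.0 * resources

def fast_cars_owned(resources: float) -> float:
    # Terminal goal: own fast cars; cars are expensive.
    return resources / 50.0

def best_action(utility, resources: float) -> str:
    # Each agent weighs two actions: stay idle, or gather resources.
    outcomes = {"idle": resources, "gather": resources + 100.0}
    return max(outcomes, key=lambda action: utility(outcomes[action]))

for goal in (paperclips_made, stamps_collected, fast_cars_owned):
    print(goal.__name__, "->", best_action(goal, resources=10.0))
# Whatever the terminal goal, every agent chooses "gather".
```

The point is not the arithmetic but the shape of the argument: as long as a goal is served by resources at all, "get more resources" falls out as a subgoal without being programmed in.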