Human-computer interaction is a much more difficult field than most people think, because of the nature of computers. When you go to use a hammer, the nature of the work you plan to do is generally predefined. You didn't pick up the hammer so you could tighten a nut, or sweep the floor, or unscrew a light bulb. The hammer has a handle that fits your hand. It has mass that suggests momentum and whacking things. A prehistoric savage could pick up a hammer and know exactly what to do with it from the start. Many tools are this way.

Computers, on the other hand, have no built-in, obvious affordances ("affordance" is the term professionals in the field use for the obvious uses an object displays: a hammer affords whacking, a baseball affords throwing, and so on). Computers have no innate uses to which they can be put. Computers are just what we make of them; they're tools to make tools. Metatools.

There aren't a lot of metatools in our natural environment, so we don't have built-in behavior for dealing with them. As soon as computers could display graphical data, programmers started displaying graphical metaphors on them (e.g., the "desktop" metaphor), but even this is limiting, partly because the channel is limited to vision and sound. A virtual hammer on your computer screen still doesn't afford whacking, except of virtual nails. You can't grab it and feel the mass.

Someday, technology will advance to the point that computers can simulate the real world well enough that people's built-in reflexes can just use them naturally. It's going to be a while longer, however (ten or twenty years longer, if you want my guess). In the meantime, we're going to have to make do with evolving standards of behavior, and humans are going to have to learn those standards and evolve with them. Get over it; driving a car wasn't that easy the first time, either.

Another thing that makes computers difficult to use is that they are one machine, but they can do lots of different, dissimilar tasks. Have you ever tried to use one of those fancy fold-up tools that has pliers, a knife, screwdrivers, a corkscrew, and who-knows-what all in one? They work, but they're not nearly as good at any one of those tasks as a dedicated tool. You have to learn to use them, too, or they can fold up unexpectedly and cut you. The more uses to which a tool can be put, the more difficult it is to make it good, safe, and intuitive at each task. Computers have an unlimited number of uses, so interface design is difficult by definition.

Also, you should remember that human-computer interaction is a field still in its infancy - we're still in the first generation of practitioners. Even automobiles were pretty hard to use at the beginning. Here's some text from the original Model "A" Instruction Book:

For average driving the spark lever should be carried about half way down the quadrant. Only for high speeds should the spark lever be advanced all the way down the quadrant. When the engine is under a heavy load as in climbing steep hills, driving through heavy sand, etc., the spark lever should be retarded sufficiently to prevent a spark knock.
...
Always retard the spark lever when starting your car. Starting the engine with the spark advanced may cause the engine to kick back, and damage the starter parts. After the engine is started, advance the spark lever about half way down the quadrant.

So, for lower RPMs you retard the spark, and for higher RPMs you advance it - got it? The "spark control rod", by the way, is a sliding control just outside the horn button on the steering wheel - don't forget to retard it every time you start the car, or you might damage the starter!

Human-computer interaction is getting better all the time (the new MacOS uses the "Eject" command to unmount media - dragging to the trash is no longer recommended, although it still works for old-timers like me), but for the next few years humans will still have to learn to use computers, just as they have to learn to use the toilet.