In the context of computer science, "efficient" means computable in polynomial time with respect to the size of the input. In other words, an algorithm is efficient if on an input of size n it terminates after at most p(n) steps, where p is some polynomial.
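To make the definition concrete, here's a toy sketch (my example, not from the text): a quadratic-time algorithm whose step count we tally explicitly, so we can check it stays within the polynomial bound p(n) = n^2.

```python
def count_duplicate_pairs(items):
    """Compare every pair of items, tallying each comparison as one 'step'.

    Two nested loops over n items make roughly n^2 / 2 comparisons,
    so the step count is bounded by the polynomial p(n) = n^2.
    """
    steps = 0
    dupes = 0
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            steps += 1
            if items[i] == items[j]:
                dupes += 1
    return dupes, steps

# 101 items: exactly 101 * 100 / 2 = 5050 comparisons, well under 101^2.
dupes, steps = count_duplicate_pairs(list(range(100)) + [0])
print(dupes, steps)  # 1 5050
```

The point is only that the step count grows like a polynomial in n; any such algorithm counts as "efficient" in this sense, even if the polynomial is large.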

There are lots of hard problems for which no efficient solution is known. Factoring, for example. The best algorithm we have (the Number Field Sieve) is super-polynomial: strictly speaking it is sub-exponential, taking roughly 2^(n^(1/3)) steps to factor an n-bit number (ignoring log factors), while simpler algorithms like trial division take about c^n steps for some constant c > 1 in the worst case.
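To see where exponential cost in the *bit length* comes from, here is a hedged sketch of the simplest factoring algorithm, trial division (my illustration, not the Number Field Sieve). A number N has n = log2(N) bits, and trial division may perform up to sqrt(N) = 2^(n/2) divisions, so its running time is exponential in n.

```python
def trial_division(N):
    """Return the smallest nontrivial factor of N, counting trial divisions.

    Worst case: about sqrt(N) = 2^(n/2) divisions for an n-bit number N,
    i.e. exponential in the bit length n.
    """
    divisions = 0
    d = 2
    while d * d <= N:
        divisions += 1
        if N % d == 0:
            return d, divisions
        d += 1
    return N, divisions  # no divisor found: N is prime

# 10403 = 101 * 103 is only a 14-bit number, yet trial division already
# needs 100 divisions (d = 2 .. 101) to find the factor.
factor, divisions = trial_division(10403)
print(factor, divisions)  # 101 100
```

Doubling the bit length squares the worst-case work, which is exactly the blow-up that separates factoring from the polynomial-time problems above.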

How big is this difference? Humongous. For example, you can run an O(n^3) algorithm on an input of size 1000 trivially on a personal computer. The same input to an exponential-time algorithm would take far more steps to complete than there are atoms in the universe.
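The gap is easy to verify directly with Python's arbitrary-precision integers. This sketch assumes the standard rough estimate of about 10^80 atoms in the observable universe.

```python
n = 1000

poly_steps = n ** 3   # cubic-time algorithm: 10^9 steps, trivial for a PC
exp_steps = 2 ** n    # exponential-time algorithm: 2^1000 steps
atoms = 10 ** 80      # rough estimate of atoms in the observable universe

print(poly_steps)            # 1000000000
print(exp_steps > atoms)     # True: 2^1000 is about 10^301, dwarfing 10^80
```

A billion steps finish in about a second on commodity hardware; 2^1000 steps would not finish before the heat death of the universe, no matter the hardware.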

Note to CS people: I've obviously swept some details under the rug.