If I were to wax polemical, I would say the fundamental problem with Hungarian notation is that it emphasizes exactly the information that is already syntactically apparent. That is, if I see

class Foo {
    int *bar;
    ...

the only things I can say about bar are that it is a member variable of Foo and that it is a pointer to int. It is impossible to deduce anything about what the programmer intended it to be used for, which is why meaningful variable names are important. If the code is changed to read

class Foo {
    int *m_piBar;
    ...

then we get no new information, but we have used up space that could have gone to a more descriptive name. In other words, if we happen to program in a language that forces us to write out the type of every variable, we might as well take advantage of that.
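To make the point concrete, here is a hypothetical C version of the same idea; the member names are made up for illustration. Both members have exactly the type "pointer to int", which the compiler already tells us, so the only question is what we spend the name on:

```c
#include <stddef.h>

struct Foo {
    int *m_piBar;       /* Hungarian: restates "member, pointer to int" */
    int *highScores;    /* same type, but the name now tells us what it holds */
};
```

The declaration carries the type either way; only the second name adds anything a reader could not already see.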

Some advocates of Hungarian notation stress that it can give us more information about a variable than the type alone does. For example, char *szFoo explicitly states that szFoo points to a zero-terminated string, while char *Foo might point to a single char somewhere in memory.

One way of including such extra information in a way that is closer to how C types are usually written is to use typedefs. For example, we can typedef char *string, and then use string foo instead of char *foo where appropriate. A less trivial example of this idea: when writing an interpreter for, say, Lisp or another dynamically typed language, one typically ends up with some type POBJECT representing pointers to generic Lisp objects. When writing C code to act on Lisp data, I would use typedefs like these:

typedef POBJECT Object;
typedef POBJECT Fixnum;
typedef POBJECT String;
typedef POBJECT Cons;
/* ... */

Object car(Cons x) {
    CHECK_ISCONS(x);
    /* return the car field of the cons cell x... */
}

Using this style, one can include more information about what kind of object x points to without cluttering up every variable name.
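As a self-contained sketch of how this might fit together, here is one possible object representation. The tagged-union layout, the CHECK_ISCONS macro, and the field names are my assumptions for illustration; a real interpreter would have many more tags and a proper allocator:

```c
#include <assert.h>
#include <stdlib.h>

/* Assumed object layout: a tag plus a union of payloads. */
typedef enum { TAG_FIXNUM, TAG_STRING, TAG_CONS } Tag;

struct object {
    Tag tag;
    union {
        long fixnum;
        char *string;
        struct { struct object *car, *cdr; } cons;
    } u;
};

typedef struct object *POBJECT;

/* The typedefs carry intent that a raw POBJECT cannot. */
typedef POBJECT Object;
typedef POBJECT Fixnum;
typedef POBJECT String;
typedef POBJECT Cons;

/* Hypothetical runtime check; shown here as a plain assert. */
#define CHECK_ISCONS(x) assert((x)->tag == TAG_CONS)

Object car(Cons x) {
    CHECK_ISCONS(x);
    return x->u.cons.car;
}

Cons make_cons(Object a, Object d) {
    Cons c = malloc(sizeof *c);
    c->tag = TAG_CONS;
    c->u.cons.car = a;
    c->u.cons.cdr = d;
    return c;
}
```

The signature Object car(Cons x) tells the reader at a glance that the argument should be a cons cell and that the result may be any Lisp object, even though both are the same C type underneath.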