Warning: I'm not a CS major. Or an EE. I don't even play one on TV. So all you who are more knowledgeable than I, jump in here, please.
NX, or the Non-eXecutable flag, is most often implemented as a feature inside a computer's Central Processing Unit, or CPU - specifically, inside the memory management unit (MMU) (thanks jasstrong!). It is intended to increase the stability and security of the machine while it is running, and is used to mark various areas of the computer's memory. In brief, the NX flag tells the computer that a flagged area of memory contains non-executable data - that is, not code. If the program counter ends up pointing into such an area, execution halts (hopefully with an orderly, recoverable error).
Why do this?
Because this prevents malicious or malformed code from being sneakily (or not so sneakily) executed on the system. It allows for greater security, because programs can be more reliably restricted to areas of known, verified code; there is little danger the computer will start executing code from a data structure, or that a stack smash will execute a JSR to somewhere...not good.
Although NX can be implemented in software, it is much more efficient if done as a flag in the MMU; that way, the machine does not need to spend cycles running software to determine whether a particular memory range is in fact executable - it can just check the NX flag in hardware as part of address translation. Much quicker.
Is this a new invention?
Nope. Larger, more expensive processors have had this feature for years; mainframes typically have it. It is, however, something that has only recently made its way into mainstream PC microprocessors. Classic 32-bit x86 page tables have no per-page execute flag; pages can only be marked readable or read/write, which isn't the same thing. However, Transmeta has announced that their new Efficeon processor (a VLIW chip that runs x86 code through Transmeta's "code morphing" translation software) will in fact implement NX in hardware. Therefore, programs which are written and compiled to take advantage of this feature will be able to do so on Efficeon machines. Note: RPGeek has informed me that in fact the first x86 CPU to implement NX was the AMD Opteron (along with its desktop sibling, the Athlon 64) - so it makes sense the Efficeon is getting it.
How will programmers take advantage of it?
As I understand it (see the warning at the top of this writeup), it will be applied when memory is allocated: when a block of memory is requested from the operating system - via mmap() and similar calls on Unix-like systems, which is what malloc() itself uses under the hood - the protection flags on that block determine whether it is executable or not. The precise syntax, function use, and features will depend on the OS and the compiler toolchain.
Isn't this sort of like NOP?
No. NOP is an instruction, not a flag - it is simply a null instruction (No OPeration) which does nothing for a clock cycle.
How might we benefit from this feature in micros?
Ideally, it might make things like Windows pop-ups and other forms of malware much, much harder to write. Buffer overflow exploits would be much less common: rather than a buffer overflow being able to write arbitrary code onto the stack and have it run, the buffer would exist in a range tagged as NX, and hence even if the overflow occurred, the CPU would refuse to execute the malicious code planted there. Generally, it makes it more likely that errors will disrupt only a single process, rather than the general functioning of the computer. Increased stability, therefore, and better programmatic security both become more easily attainable with this feature implemented. Although as yet there is no support (I believe) for the NX feature in Windows, other OSes support it - Linux supports it on architectures that offer it, for example.
Are there disadvantages?
Yes, some. Any application that uses self-modifying code will likely have to be rewritten to ensure that its method of memory allocation and management doesn't leave its generated code in NX-flagged pages. One egregious self-modifier, according to several of my coder colleagues, is XFree86 - the free X Window System implementation. It generates code inside malloc()-ed ranges, so I'm not sure what'll need to be done to fix that. Also, the coder will lose some flexibility due to the need to pre-plan which memory ranges contain code and which contain data.
Are there other ways of doing this?
Sure. One method some architectures use is to have physically separate code and data memory (the Harvard architecture). This is even more robust, but is of course highly demanding of resources, and arguably wasteful of them to boot.
Other stuff those more knowledgeable than I have offered:
jasmine says: incidentally, having a Harvard architecture isn't wasteful of resources; in some ways it's more efficient, since, for example, instruction caches do not need dirty line writeback logic. I should also point out that the NX flag will make it much more difficult to crack software; seems likely it would make systems like XBox much more difficult to, uh, manipulate.
czeano says: you should mention specific other architectures that have done this for a while, e.g. sparc (ugh $parc blech). SPARC mainly uses it to mark off 32-bit code on its 64-bit processors, so it can be handled specially.
Brontosaurus says: It won't stop buffer overflows happening. It would usually stop one leading to the execution of arbitrary code, but the program would still crash, which could be used to DoS a machine.
ariels says: Regarding Windows, I think I read MS will support NX in Longhorn, instead of TSCB (Palladium) or somesuch alphabet soup. Then again, they said they'd do Palladium.