Both of these are approaches to doing I/O on computers and other digital systems. The basic problem is this: there are bits of hardware (like sensors, hard disks, etc.) that the CPU has asked to do something. The problem is that the requested act may not take a predetermined amount of time ... it might take 3 microseconds or it might take 20 milliseconds.
So the CPU has to somehow figure out when the data it requested is ready. And this is where the two approaches fit in.
The first approach, polling, works like this: the CPU keeps asking the device it's talking to "Is the data ready yet?" (Think of the episode of The Simpsons where they go for a vacation in the family car. The conversation goes:
Lisa and Bart: Are we there yet?
Homer: No.
Lisa and Bart: Are we there yet?
Homer: No.
Lisa and Bart: Are we there yet?
Homer: No.
This is an example of polling).
Similarly with digital systems, the CPU simply keeps asking the device if it's finished until it says yes. In practice this is usually implemented by having the device of interest sit on the memory bus, so the CPU reads a special address in memory, and the device says "That's me!" and responds with its status. (Actually, there's a slight difference here too ... some digital systems use memory-mapped I/O, while others use I/O-mapped I/O, but we'll leave that for another node.)
The advantages are that it's simple, it doesn't require any special hardware or CPU design, and there's no messy asynchronous stuff to handle. The big disadvantage is that the CPU wastes its time sitting there asking the same question over and over again when it could be doing useful work. Of course, it may be that the CPU has nothing else to do, in which case nobody minds.
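In code, polling boils down to a busy-wait loop on a status register. Here's a minimal C sketch of that loop; the register addresses, names, and READY bit are invented for illustration (a real device's datasheet would define them):

    #include <stdint.h>

    /* Hypothetical memory-mapped device registers. The addresses and the
     * bit layout are made up for this example. "volatile" tells the
     * compiler the hardware can change these behind its back, so every
     * read must really happen. */
    #define DEV_STATUS   (*(volatile uint8_t *)0x4000)
    #define DEV_DATA     (*(volatile uint8_t *)0x4001)
    #define STATUS_READY 0x01   /* bit 0: device has data ready */

    uint8_t poll_read(void)
    {
        /* "Are we there yet?" ... keep asking until the answer is yes. */
        while ((DEV_STATUS & STATUS_READY) == 0)
            ;   /* busy-wait: the CPU does no useful work here */

        return DEV_DATA;   /* data is ready, fetch it */
    }

The while loop is exactly the Simpsons conversation: the CPU goes round and round, burning cycles, until the status bit finally flips.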
The alternative is interrupts. Interrupts work more like "Here's something to do, tell me when it's done, OK?" So the CPU tells a device to go do something, then gets on with other work until what is called an interrupt arrives from the device, saying "Yep, I'm done. Do something about it!" At that point the CPU goes "Okay!", stops what it's doing, handles the interrupt, then goes back to what it was doing.
The digital practicality is that devices connect to the CPU using special interrupt lines. When a device is ready, it sets its line to a particular binary value. Then there's a protocol by which the CPU lets the device know that it has seen the interrupt and will get to it when it can. It then handles the interrupt by reading memory on the device, much as in the polling case.
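Here's a C sketch of the interrupt side. How the handler actually gets attached to the interrupt line (a vector table entry, a compiler attribute) is CPU-specific and omitted; the register address and names are, again, made up:

    #include <stdint.h>

    #define DEV_DATA (*(volatile uint8_t *)0x4001)

    /* Shared between the handler and the main program. "volatile" because,
     * from the main program's point of view, they change asynchronously. */
    static volatile uint8_t data_ready;
    static volatile uint8_t latest;

    /* The interrupt handler: the hardware "jumps" here when the device
     * raises its interrupt line, then returns to whatever was running. */
    void device_isr(void)
    {
        latest = DEV_DATA;   /* reading the device also acknowledges it */
        data_ready = 1;      /* tell the main program there's work to do */
    }

    int main(void)
    {
        for (;;) {
            if (data_ready) {
                data_ready = 0;
                /* ... do something with `latest` ... */
            }
            /* ... otherwise the CPU is free to do useful work ... */
        }
    }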
Of course the interrupt I/O mechanism looks more elegant. But the problem is that it opens a massive can of worms. For starters, you need more hardware: actual "interrupt" wires that connect back to the CPU from every device. The CPU needs to be able to "jump" out of what it's doing, handle the interrupt, then get back to what it was doing -- no easy feat, and one that requires quite tricky design. Then, if I have more than one device attached to the CPU, things get really hairy. What happens if I get an interrupt while I'm handling another interrupt? (The usual answers involve giving interrupts priority levels, and masking -- temporarily disabling -- interrupts while critical work is done.)
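A tiny C sketch of that masking idea. The enable/disable primitives are stubbed out here, since the real ones are single, CPU-specific instructions (cli/sei on AVR, for instance):

    #include <stdint.h>

    /* Stubs so the sketch is self-contained; on a real CPU each would be
     * a single instruction or compiler intrinsic. */
    static void disable_interrupts(void) { /* e.g. asm("cli") */ }
    static void enable_interrupts(void)  { /* e.g. asm("sei") */ }

    static volatile uint32_t event_count;   /* also updated by a handler */

    uint32_t read_count_safely(void)
    {
        uint32_t snapshot;
        disable_interrupts();     /* no handler can sneak in mid-read */
        snapshot = event_count;   /* non-atomic on, say, an 8-bit CPU */
        enable_interrupts();      /* interrupts raised meanwhile fire now */
        return snapshot;
    }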
So, there's a design tradeoff: go for a simple but wasteful solution, or a complicated, far more general one.
Most modern, moderately interesting digital systems use interrupts, including almost all computers. However, some really simple embedded systems -- in small robots, alarm clocks, and the like -- still use polling, because it's the right tool for the job.