Starvation in computer science actually has two meanings. Both are closely related to processes and inter-process communication (IPC). The first one, which ORBMan already mentioned, is very similar to a livelock. One example would be a flawed solution to the dining philosophers problem:
When the philosophers want to eat, they first take their left fork and then try to take their right one. If a philosopher cannot take the right fork, he lays the left one down again. Although this protocol works most of the time, situations can occur in which no philosopher can eat, and consequently all of them starve. This happens when all philosophers try to grab their left fork at the same time: none of them can get the right one, so all of them lay the left one down, and they keep repeating this cycle forever without ever eating.
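The cycle above can be shown with a small, deterministic round-based simulation (all names here are hypothetical, and the lockstep timing is the assumed worst case): every round, each philosopher takes the left fork, finds the right one already taken, and puts the left one back.

```python
# Flawed dining philosophers protocol, simulated in lockstep rounds.
N = 5  # five philosophers, five forks

def run_rounds(rounds):
    meals = [0] * N
    for _ in range(rounds):
        forks = [None] * N               # None = fork lying on the table
        for p in range(N):               # step 1: everyone takes the left fork
            forks[p] = p
        for p in range(N):               # step 2: everyone tries the right fork
            right = (p + 1) % N
            if forks[right] is None:     # never true: all forks are taken
                meals[p] += 1
        # step 3: nobody got the right fork, so everyone lays the left one down
    return meals

print(run_rounds(1000))  # [0, 0, 0, 0, 0] -- busy forever, but nobody eats
```

The philosophers are constantly active (unlike a deadlock, where everyone is blocked), yet no meal counter ever increases, which is exactly the livelock-style starvation described above.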
Another form of starvation can occur with badly designed scheduling algorithms. An operating system using simple priority-based scheduling, which always picks the runnable process with the highest priority, can let a low-priority process starve: if a high-priority process is runnable all the time, the scheduler will always choose it, and the low-priority process will never get any processor time.
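A minimal sketch of such a strict priority scheduler (the function and process names are made up for illustration) makes the problem obvious: as long as the high-priority process stays runnable, the low-priority one never accumulates a single tick of CPU time.

```python
# Strict priority scheduling: always run the highest-priority runnable process.
def schedule(processes, ticks):
    """processes: list of (name, priority); a higher number means higher priority."""
    cpu_time = {name: 0 for name, _ in processes}
    for _ in range(ticks):
        # Both processes are assumed runnable every tick, so the
        # high-priority one wins every single time.
        name, _ = max(processes, key=lambda p: p[1])
        cpu_time[name] += 1
    return cpu_time

print(schedule([("high", 10), ("low", 1)], 1000))
# {'high': 1000, 'low': 0} -- the low-priority process starves
```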
Sometimes both kinds of starvation combine. Think of an operating system that has no blocking primitives, so processes waiting for others have to busy-wait. If a high-priority process busy-waits for a low-priority one under strict priority scheduling, both starve: the high-priority process spins forever, and the low-priority process never gets the CPU to make progress.
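This combined case can also be sketched deterministically (again with hypothetical names): "low" holds a lock, "high" busy-waits for it, and because "high" is always runnable while spinning, a strict priority scheduler never gives "low" the chance to release the lock.

```python
# Busy-waiting plus strict priority scheduling: both processes starve.
def run(ticks):
    lock_holder = "low"                  # "low" already acquired the lock
    progress = {"high": 0, "low": 0}
    for _ in range(ticks):
        # Strict priority: "high" is runnable (it is busy-waiting), so it
        # is chosen every tick and "low" never runs.
        if lock_holder is None:          # never true while "low" cannot run
            progress["high"] += 1
        # else: "high" just spins, wasting the tick; if "low" ever ran,
        # it would set lock_holder = None and make progress itself.
    return progress

print(run(1000))  # {'high': 0, 'low': 0} -- neither process makes progress
```

Note this resembles priority inversion, except that with real blocking the high-priority process would sleep and let the low-priority one finish; busy-waiting removes even that escape.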
The first kind can be avoided with proper synchronization, e.g. mutual exclusion via semaphores or monitors; for practical implementations, read up on the dining philosophers problem or the sleeping barber problem. The second kind is solved by better scheduling algorithms. One simple approach is dynamic priorities (aging): the scheduler regularly scans for starving processes and boosts their priorities.
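The aging idea can be sketched by extending the strict scheduler: every tick, each waiting process gains one point of dynamic priority, and a process that runs drops back to its base priority (all names and the +1 aging rate are illustrative assumptions, not any particular OS's policy).

```python
# Priority aging: waiting processes are gradually boosted so the
# low-priority one eventually outranks the high-priority one.
def schedule_with_aging(ticks):
    base = {"high": 10, "low": 1}            # static base priorities
    dynamic = dict(base)                     # current (boosted) priorities
    cpu_time = {"high": 0, "low": 0}
    for _ in range(ticks):
        running = max(dynamic, key=dynamic.get)  # pick highest dynamic priority
        cpu_time[running] += 1
        dynamic[running] = base[running]     # the chosen process resets to base
        for name in dynamic:                 # every waiting process ages upward
            if name != running:
                dynamic[name] += 1
    return cpu_time

print(schedule_with_aging(1000))  # "low" now gets a share of the CPU
```

The high-priority process still gets most of the CPU, but the low-priority one is guaranteed to run periodically, which is exactly what prevents starvation.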