The art of having a bunch of threads (lightweight processes that share a single address space) working in tandem and generally not mucking up each other's data. Proper multithreading requires the artful use of locks, barriers, and/or pipes/sockets, or an algorithm which isn't sensitive to race conditions.

The easiest way to make an object-oriented system threadsafe is to have every class that may be accessed by multiple threads at once expose only accessor functions which lock a mutex (or similar locking device) when they begin and unlock it when they end. pthreads takes a lot of the work out of this. However, deadlock can occur if you're not careful; the best way to avoid it is to never let an accessor function call another locking accessor function without first unlocking itself (which means that chained accessor functions can't be called from within a critical section).
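As a minimal sketch of that pattern with pthreads (the `counter_t` type and function names here are invented for illustration, not from any particular library):

```c
#include <pthread.h>

/* A thread-safe counter: every accessor locks the mutex on entry and
 * unlocks it on exit, so concurrent callers can't corrupt the value. */
typedef struct {
    pthread_mutex_t lock;
    int value;
} counter_t;

void counter_init(counter_t *c) {
    pthread_mutex_init(&c->lock, NULL);
    c->value = 0;
}

void counter_increment(counter_t *c) {
    pthread_mutex_lock(&c->lock);
    c->value++;
    pthread_mutex_unlock(&c->lock);  /* unlock before calling any other
                                        locking accessor, or risk deadlock */
}

int counter_get(counter_t *c) {
    pthread_mutex_lock(&c->lock);
    int v = c->value;
    pthread_mutex_unlock(&c->lock);
    return v;
}
```

Note that `counter_increment` never calls another locking accessor while it still holds the lock; that is exactly the discipline described above.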

Another good way to get pervasive multithreading while also adding major scalability is to make each object a server of some sort which atomically takes in and responds to requests; this is the model adopted by CORBA. However, requests are generally executed serially within each message server unless you make the server itself multithreaded (which brings you back to the first approach), and the message-passing scheme doesn't map well onto many problems.
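A rough sketch of that object-as-server idea with pthreads (all the type and function names are made up for illustration): clients enqueue requests, and one server thread executes them serially, so the object's private state needs no further locking.

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct request {
    int arg;
    struct request *next;
} request_t;

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  ready;
    request_t *head, *tail;
    long state;           /* private data: only the server thread touches it */
    int shutting_down;
} object_server_t;

void server_init(object_server_t *s) {
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->ready, NULL);
    s->head = s->tail = NULL;
    s->state = 0;
    s->shutting_down = 0;
}

/* called from any client thread: atomically enqueue a request */
void server_send(object_server_t *s, int arg) {
    request_t *r = malloc(sizeof *r);
    r->arg = arg;
    r->next = NULL;
    pthread_mutex_lock(&s->lock);
    if (s->tail) s->tail->next = r; else s->head = r;
    s->tail = r;
    pthread_cond_signal(&s->ready);
    pthread_mutex_unlock(&s->lock);
}

void server_shutdown(object_server_t *s) {
    pthread_mutex_lock(&s->lock);
    s->shutting_down = 1;
    pthread_cond_broadcast(&s->ready);
    pthread_mutex_unlock(&s->lock);
}

/* the server thread's main loop: requests execute one at a time */
void *server_loop(void *p) {
    object_server_t *s = p;
    for (;;) {
        pthread_mutex_lock(&s->lock);
        while (!s->head && !s->shutting_down)
            pthread_cond_wait(&s->ready, &s->lock);
        if (!s->head) { pthread_mutex_unlock(&s->lock); return NULL; }
        request_t *r = s->head;
        s->head = r->next;
        if (!s->head) s->tail = NULL;
        pthread_mutex_unlock(&s->lock);

        s->state += r->arg;   /* "perform the operation", no lock needed */
        free(r);
    }
}
```

The serial bottleneck the paragraph mentions is visible here: everything funnels through one `server_loop`, so to scale further you'd need multiple loop threads, which puts you right back to locking the state.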

Personally, I think that with good coding practices combined with responsible usage of object-oriented languages such as C++ or Java, it's relatively simple to make pervasively-multithreaded applications.

The previous post gives a good description of what multithreading is, but I thought I'd offer a small example of why it can be useful.

Consider a program that listens for packets on a UDP socket, performs some operation on the input, and then spits the result back to the sender. I would code this using several threads:

  • One thread that waits for incoming packets and puts each packet in an input (FIFO) queue, then goes back to wait for the next packet.
  • One thread that picks packets out of the input queue, performs the operation, then puts the result in an output (also FIFO) queue. Actually, you could have several threads working in parallel doing this, which would be even more efficient if the operation involves any kind of blocking I/O (such as database access).
  • One thread that takes items from the output queue and sends each packet back to the original sender.
Doing things this way is far better than having a single thread that does everything: listening for input, doing the calculation, and writing output. In that case, every request would have to wait until the previous one had been processed, which is fine if we only get a few requests, but slows everything down when traffic goes up. Eventually, the network layer will start dropping incoming packets as its input buffer fills up. A dedicated thread that reads every packet as it arrives avoids that.
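The three-thread pipeline above might be sketched like this with pthreads. To keep it self-contained, the socket calls (`recvfrom`/`sendto`) are replaced by an array of fake "packets" and the operation is a placeholder (doubling); every name here is invented for illustration.

```c
#include <pthread.h>
#include <stdlib.h>

#define DONE (-1)  /* sentinel marking the end of the packet stream */

typedef struct node { int packet; struct node *next; } node_t;

/* a simple unbounded FIFO queue protected by a mutex + condition variable */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  ready;
    node_t *head, *tail;
} queue_t;

void queue_init(queue_t *q) {
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->ready, NULL);
    q->head = q->tail = NULL;
}

void queue_put(queue_t *q, int packet) {
    node_t *n = malloc(sizeof *n);
    n->packet = packet;
    n->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
    pthread_cond_signal(&q->ready);
    pthread_mutex_unlock(&q->lock);
}

int queue_get(queue_t *q) {
    pthread_mutex_lock(&q->lock);
    while (!q->head)
        pthread_cond_wait(&q->ready, &q->lock);
    node_t *n = q->head;
    q->head = n->next;
    if (!q->head) q->tail = NULL;
    pthread_mutex_unlock(&q->lock);
    int packet = n->packet;
    free(n);
    return packet;
}

queue_t input_q, output_q;

/* in a real server this thread would block in recvfrom() */
void *listener(void *arg) {
    int *packets = arg;
    for (int i = 0; packets[i] != DONE; i++)
        queue_put(&input_q, packets[i]);
    queue_put(&input_q, DONE);
    return NULL;
}

/* performs "the operation"; several copies of this thread could run */
void *worker(void *arg) {
    (void)arg;
    for (;;) {
        int p = queue_get(&input_q);
        if (p == DONE) { queue_put(&output_q, DONE); return NULL; }
        queue_put(&output_q, p * 2);
    }
}

/* in a real server this thread would call sendto() on each result */
void *sender(void *arg) {
    int *sum = arg;
    for (;;) {
        int p = queue_get(&output_q);
        if (p == DONE) return NULL;
        *sum += p;
    }
}
```

Because the listener does nothing but pull packets and enqueue them, it is back waiting on the socket almost immediately, which is what keeps the network layer's buffer from overflowing under load.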
