There are very few truly decentralized software systems in existence. Usually, we only get halfway there. "Peer-to-peer" is a great idea in theory, but actually implementing it presents a host of problems.
Napster is the classic example: every client is a peer sharing files with other peers, but all of them rely on a central server to index what each peer is sharing and to match searches against that index. Even if there isn't a single central server everyone uses, there are several smaller servers that don't communicate with each other. (IRC works more or less the same way.) The purpose of Napster's model isn't decentralization, but to let users download large files from other users. The server itself serves only as a specialized database for clients to find each other, keeping the traffic to and from it to a minimum.
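That division of labor can be sketched in a few lines. This is a toy model with invented names, not the real Napster protocol: the server only maps filenames to peers, and the transfer itself would happen directly between peers.

```python
# Minimal sketch of a central-index model (hypothetical, Napster-style).
# The server never touches file data; it only matches searches to peers.

class IndexServer:
    def __init__(self):
        self.index = {}  # filename -> set of peer addresses

    def register(self, peer, filenames):
        # A peer announces the files it is sharing.
        for name in filenames:
            self.index.setdefault(name, set()).add(peer)

    def unregister(self, peer):
        # When a peer disconnects, its files vanish from every search.
        for peers in self.index.values():
            peers.discard(peer)

    def search(self, filename):
        # Return the peers a client could download this file from.
        return sorted(self.index.get(filename, set()))

server = IndexServer()
server.register("peer-a:6699", ["song.mp3", "demo.mp3"])
server.register("peer-b:6699", ["song.mp3"])
print(server.search("song.mp3"))   # both peers offer it
server.unregister("peer-a:6699")
print(server.search("song.mp3"))   # only peer-b remains
```

The weakness is visible in the same sketch: kill the `IndexServer` and every peer is still online, still holding files, and completely unable to find anyone.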
Gnutella was supposed to solve all of Napster's problems by being truly decentralized, but in doing so it turns every client into a server. And since no single server contains all the files people might be looking for, searches have to fan out across the network, which creates problems of scalability. Small Gnutella networks work very well, but only contain a subset of all the files users might be looking for. Scaling up to a large network means each search floods from peer to peer, producing exponentially more traffic and crowding out the actual file transfers.
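The arithmetic behind that flooding is easy to see. As a back-of-the-envelope sketch (an idealized tree, not a real Gnutella topology), assume each peer forwards a query to its neighbors until a time-to-live counter runs out:

```python
# Rough message count for one flooded search (idealized model).
# The originator contacts `degree` neighbours; each of those forwards
# to `degree - 1` others, and so on, until the TTL is exhausted.

def query_messages(degree, ttl):
    total, frontier = 0, degree
    for _ in range(ttl):
        total += frontier           # messages sent at this hop
        frontier *= degree - 1      # each recipient forwards onward
    return total

for ttl in (3, 5, 7):
    print(ttl, query_messages(4, ttl))
```

With four neighbors per peer, a TTL of 3 costs 52 messages, a TTL of 5 costs 484, and a TTL of 7 costs 4,372. Reaching a larger network means raising the TTL, and every search gets an order of magnitude more expensive for everyone.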
The Internet is supposed to be decentralized, but its reliance on DNS nameservers cripples it somewhat. There are a relatively small number of nameservers that can look up every domain name and resolve it to an IP address, and if one of them goes down or becomes inaccessible, a very large chunk of the Internet is suddenly invisible to the rest of it. In addition, a single Web site is almost never decentralized, since all of its data is provided by a single server or cluster of servers. Systems like Akamai, which replicate large files around the world to provide fast local downloads, are the exception, but even this is just a traffic-thinning measure similar to Napster's, instead of true decentralization.
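The fragility comes from the hierarchy itself: every name under a zone is reachable only through that zone's nameservers. A toy model (invented structure, nothing like a real resolver) makes the failure mode concrete:

```python
# Toy model of hierarchical name lookup (hypothetical data, not DNS).
# Each zone's names are reachable only through that zone's nameserver,
# so one server going dark hides everything beneath it.

nameservers = {
    "com": {"example.com": "93.184.216.34", "another.com": "10.0.0.2"},
    "org": {"example.org": "10.0.0.3"},
}
online = {"com", "org"}

def resolve(domain):
    zone = domain.rsplit(".", 1)[-1]   # e.g. "com" for example.com
    if zone not in online:
        return None                    # the whole zone is invisible
    return nameservers.get(zone, {}).get(domain)

print(resolve("example.com"))   # resolves normally
online.discard("com")           # the .com nameserver goes down
print(resolve("example.com"))   # None: the zone has vanished
print(resolve("example.org"))   # other zones are unaffected
```

The sites under the dead zone are still up and still serving; they've simply become unfindable, which for most purposes is the same as being down.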
Usenet may be the only truly decentralized widespread system, because the newsservers are all peers networked to each other without any central server. As a post goes out from a user, it is passed from one server to another, gradually propagating itself around the entire world (and much more quickly than you might expect). If any one newsserver goes down, the others it was linked to are still connected to the rest of the network.
It works because old posts die after a set amount of time, because different newsservers can choose which newsgroups they track, and because most of the posts (in theory, anyway) are relatively small text files. Storing all the posts for even a large number of newsgroups takes up a relatively small amount of storage, and transmitting the text files on a regular basis is not network-intensive. Compare this to Gnutella, which tries to provide a permanent library of very large files without separating the client from the server.
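Those three properties together can be sketched in a toy simulation (a hypothetical model, not the NNTP protocol): each server relays a post to its peers, keeps it only if it carries the group, and expires it after a set age.

```python
# Toy Usenet-style flood propagation (invented model, not NNTP).

class NewsServer:
    def __init__(self, name, groups):
        self.name, self.groups = name, set(groups)
        self.peers, self.seen, self.store = [], set(), {}

    def receive(self, post_id, group, day):
        if post_id in self.seen:       # don't relay duplicates
            return
        self.seen.add(post_id)
        if group in self.groups:       # only store groups we carry
            self.store[post_id] = day
        for peer in self.peers:        # flood onward to every peer
            peer.receive(post_id, group, day)

    def expire(self, today, max_age=7):
        # Old posts die after a set amount of time.
        self.store = {p: d for p, d in self.store.items()
                      if today - d < max_age}

a = NewsServer("a", ["comp.misc"])
b = NewsServer("b", ["comp.misc", "rec.music"])
c = NewsServer("c", ["rec.music"])
a.peers, b.peers, c.peers = [b], [a, c], [b]   # a - b - c chain

a.receive("post-1", "rec.music", day=0)  # posted via server a
print("post-1" in c.store)               # reached c by way of b
print("post-1" in a.store)               # a relayed it but didn't keep it
c.expire(today=10)
print("post-1" in c.store)               # gone after the expiry window
```

Notice there's no distinguished node anywhere in the topology: remove `b` before the post and it simply doesn't propagate past the break, but the remaining servers keep working among themselves.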
Even a partially-decentralized network has advantages over the fully-centralized sort, but as long as there is any essential reliance on one server, or a few servers, those advantages are limited. A thinning of network traffic to and from the server is achieved, but total independence from it is not. Only when files are relatively small is total distribution made practical and full decentralization achievable.