By early 2000 a revolution was underway in an entirely new form of computing. Sparked by the phenomenal success of a number of highly publicised applications, peer-to-peer computing – P2P, as it is commonly known – heralded a new computing model for the Internet age, and in a very short space of time it had achieved considerable traction with mainstream computer users and the PC industry:
- the Napster MP3 music file-sharing application went live in September 1999 and had attracted more than 20 million users by mid-2000
- by the end of 2000, over 100 companies and numerous research projects were engaged in P2P computing
- by early 2001, the SETI@home program, which uses distributed processing to analyse radio-telescope data, had attracted more than 2.6 million users, who had donated over 500,000 years of CPU time to the hunt for extraterrestrial intelligence.
P2P computing provides an alternative to the traditional client-server architecture and can be defined simply as the sharing of computer resources and services by direct exchange. While it employs the existing infrastructure of networks, servers, and clients, P2P offers a computing model that is orthogonal to the client-server model; the two models coexist, intersect, and complement each other.
In the client-server model, the client makes requests of the server with which it is networked, and the server – typically an unattended system – responds to those requests and acts on them. With P2P computing, each participating computer – referred to as a peer – functions as a client with a layer of server functionality. This allows a peer to act as both a client and a server within the context of a given application: it can initiate requests, and it can respond to requests from other peers in the network. The ability to make direct exchanges with other users offers a number of compelling advantages – both technical and social – to individual users and large organisations alike.
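This dual role is easiest to see in code. The sketch below is a minimal Python illustration, assuming plain TCP sockets, made-up localhost ports, and a trivial "ping" exchange; it is not any particular P2P protocol. Each Peer runs its server role in a background thread while its client role issues requests to other peers.

```python
# Minimal sketch of a peer's dual role: a server loop answering requests
# plus a client method issuing them. The ports, message format, and
# localhost-only setup are illustrative assumptions.
import socket
import threading
import time

class Peer:
    def __init__(self, host="127.0.0.1", port=9000):
        self.host, self.port = host, port

    def serve(self):
        """Server role: accept and answer requests from other peers."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((self.host, self.port))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(1024).decode()
                conn.sendall(f"peer:{self.port} answered '{request}'".encode())

    def start(self):
        """Run the server role in the background so the client role stays free."""
        threading.Thread(target=self.serve, daemon=True).start()

    def request(self, host, port, message):
        """Client role: initiate a request to another peer and return its reply."""
        with socket.create_connection((host, port)) as conn:
            conn.sendall(message.encode())
            return conn.recv(1024).decode()

if __name__ == "__main__":
    a, b = Peer(port=9001), Peer(port=9002)
    a.start()
    b.start()
    time.sleep(0.2)  # give both listeners a moment to come up
    print(a.request("127.0.0.1", 9002, "ping"))  # a acts as client, b as server
    print(b.request("127.0.0.1", 9001, "ping"))  # roles reversed
```

In a real application the request handler would do useful work – serving a file, running a computation, relaying a search – but the essential shape is the same: every node both listens and initiates.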
Technically, P2P provides the opportunity to exploit vast resources that would otherwise go unused, including processing power for large-scale computations and enormous storage capacity. P2P also allows the single-source bottleneck to be eliminated: data and control can be distributed, and requests load-balanced, across peers spread over the Internet. As well as helping to optimise performance, this can remove the risk of a single point of failure. When P2P is used within the enterprise, it may be able to replace some costly data-centre functions with distributed services between clients; storage for data retrieval and backup, for example, can be placed on clients. In addition, the P2P infrastructure allows direct access and shared space among peers, which can enable remote maintenance capabilities.
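As a concrete illustration of distributing data and load-balancing requests without a central server, the sketch below hashes each item's key to decide which peers hold it, with a small replication factor so that one peer failing loses nothing. The peer list, replication factor, and file names are hypothetical; real systems use more robust schemes such as distributed hash tables.

```python
# Sketch: spread stored items (or requests) across peers by hashing keys,
# so no single machine becomes a bottleneck or a single point of failure.
# PEERS, REPLICAS, and the example keys are illustrative assumptions.
import hashlib

PEERS = ["peer-a:9001", "peer-b:9002", "peer-c:9003", "peer-d:9004"]
REPLICAS = 2  # keep each item on two peers

def owners(key, peers=PEERS, replicas=REPLICAS):
    """Pick which peers hold a given key, deterministically from its hash."""
    start = int(hashlib.sha256(key.encode()).hexdigest(), 16) % len(peers)
    return [peers[(start + i) % len(peers)] for i in range(replicas)]

if __name__ == "__main__":
    for key in ("backup/2001-03.tar", "report.doc", "song.mp3"):
        print(key, "->", owners(key))
```

Because every peer can compute the same mapping locally, requests for a given item go straight to the peers that hold it, with no central directory in the path.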
Much of P2P's wide appeal is due to social and psychological factors. For example, users can easily form their own autonomous online communities and run them as they collectively choose. Many of these communities are dynamic and ever changing: users come and go, and may be active or inactive as they please. Other users value the ability to bypass centralised control. In effect, P2P has the power to make users autonomous.