Distributed computing

Distributed computing is a method of computer processing in which different parts of a program run simultaneously on two or more computers that communicate with each other over a network. Distributed computing is a type of parallel computing, but the latter term is most commonly used to refer to processing in which different parts of a program run simultaneously on two or more processors that are part of the same computer. Both types of processing require that a program be parallelized, that is, divided into sections that can run simultaneously; distributed computing additionally requires that this division take into account the different environments in which the sections will run. For example, two computers are likely to have different file systems and different hardware components.

An example of distributed computing is BOINC, a framework in which large problems are divided into many small problems that are distributed to many computers; the small results are later reassembled into a solution to the larger problem.
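
The divide-distribute-reassemble pattern can be sketched in a few lines of Python. The following is an illustration only, not BOINC's actual protocol: worker processes stand in for volunteer machines, a large task (summing a large range of integers) is split into independent work units, and the partial results are combined at the end.

    from concurrent.futures import ProcessPoolExecutor

    def work_unit(bounds):
        # Stand-in for a work unit shipped to a volunteer machine:
        # sum one slice of a very large range.
        lo, hi = bounds
        return sum(range(lo, hi))

    if __name__ == "__main__":
        n, chunk = 10_000_000, 1_000_000
        units = [(lo, min(lo + chunk, n)) for lo in range(0, n, chunk)]
        # Each process plays the role of a remote computer; a real project
        # would send units over the network instead.
        with ProcessPoolExecutor() as pool:
            partials = pool.map(work_unit, units)
        # Reassemble the small results into the larger solution.
        print(sum(partials) == sum(range(n)))  # True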

Distributed computing is a natural result of using networks to allow computers to communicate efficiently. It is, however, distinct from computer networking, which refers to two or more computers interacting with each other but not, typically, sharing the processing of a single program. The World Wide Web is an example of a network, but not an example of distributed computing.

There are numerous technologies and standards used to construct distributed computations, including some which are specially designed and optimized for that purpose, such as Remote Procedure Calls (RPC), Remote Method Invocation (RMI), and .NET Remoting.
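
As a concrete taste of the remote procedure call style, the sketch below uses Python's standard-library XML-RPC modules (chosen here only for brevity; the technologies named above differ considerably in detail). The client invokes a function on the server almost as if it were local; the port number and function name are illustrative assumptions.

    # server.py -- expose a function over XML-RPC
    from xmlrpc.server import SimpleXMLRPCServer

    def add(a, b):
        return a + b

    server = SimpleXMLRPCServer(("localhost", 8000))
    server.register_function(add, "add")
    server.serve_forever()

    # client.py -- run separately; calls the remote function like a local one
    from xmlrpc.client import ServerProxy

    proxy = ServerProxy("http://localhost:8000/")
    print(proxy.add(2, 3))  # 5, computed on the server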

Organization

Organizing the interaction between the computers is of prime importance. In order to use the widest possible range and types of computers, the protocol or communication channel should not contain or use any information that may not be understood by certain machines. Special care must also be taken that messages are delivered correctly and that invalid messages, which could otherwise bring down the system and perhaps the rest of the network, are rejected.
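
One common way to meet these requirements, sketched below, is to attach an integrity check to every message and to reject anything that fails validation instead of letting it propagate through the system. The envelope format here is an illustrative assumption, not a standard protocol.

    import hashlib
    import json

    def encode(payload: dict) -> bytes:
        # Serialize the message with a checksum so the receiver can
        # detect corruption introduced in transit.
        body = json.dumps(payload, sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        return json.dumps({"checksum": digest, "body": body}).encode()

    def decode(raw: bytes) -> dict:
        # Reject, rather than process, any message that is malformed
        # or fails its integrity check.
        try:
            envelope = json.loads(raw)
            body = envelope["body"]
            if hashlib.sha256(body.encode()).hexdigest() != envelope["checksum"]:
                raise ValueError("checksum mismatch")
            return json.loads(body)
        except (KeyError, ValueError) as exc:
            raise ValueError(f"invalid message rejected: {exc}")

    print(decode(encode({"op": "ping"})))  # {'op': 'ping'}
    # decode(b"garbage") raises ValueError instead of crashing the node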

Another important factor is the ability to send software to another computer in a portable way, so that it can execute and interact with the existing network. This may not always be possible or practical when differing hardware and resources are involved, in which case other methods must be used, such as cross-compiling or manually porting the software.

Goals and advantages

There are many different types of distributed computing systems and many challenges to overcome in successfully designing one. The main goal of a distributed computing system is to connect users and resources in a transparent, open, and scalable way. Ideally this arrangement is drastically more fault-tolerant and more powerful than many combinations of stand-alone computer systems.

Openness

Openness is the property of distributed systems such that each subsystem is continually open to interaction with other systems (see references). Web Services protocols are standards which enable distributed systems to be extended and scaled. In general, an open system that scales has an advantage over a perfectly closed and self-contained system.

Consequently, open distributed systems are required to meet the following challenges:

Monotonicity
Once something is published in an open system, it cannot be taken back.
Pluralism
Different subsystems of an open distributed system include heterogeneous, overlapping and possibly conflicting information. There is no central arbiter of truth in open distributed systems.
Unbounded nondeterminism
Different subsystems of an open distributed system can come up and go down asynchronously, and communication links between them can appear and disappear. Therefore, the time it will take to complete an operation cannot be bounded in advance (see unbounded nondeterminism).
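
Because the completion time of an operation cannot be bounded, practical systems bound their own waiting instead, typically with timeouts and retries. The sketch below illustrates that coping pattern; the simulated delay is an illustrative assumption.

    import random
    import time
    from concurrent.futures import ThreadPoolExecutor, TimeoutError

    def call_remote_subsystem() -> str:
        # Stand-in for a request to another subsystem; how long it takes,
        # or whether it answers at all, cannot be known in advance.
        time.sleep(random.uniform(0.0, 2.0))
        return "reply"

    # We cannot bound the operation itself, so we bound our waiting:
    # give up after one second and leave the caller to retry or report failure.
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(call_remote_subsystem)
        try:
            print(future.result(timeout=1.0))
        except TimeoutError:
            print("no reply within deadline; retry or give up")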

Scalability

A scalable system is one that can easily be altered to accommodate changes in the number of users, resources, and computing entities assigned to it. Scalability can be measured along three different dimensions:

Load scalability
A distributed system should make it easy to expand and contract its resource pool to accommodate heavier or lighter loads (a sketch of this idea follows the list).
Geographic scalability
A geographically scalable system is one that maintains its usefulness and usability, regardless of how far apart its users or resources are.
Administrative scalability
No matter how many different organizations need to share a single distributed system, it should still be easy to use and manage.
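
Load scalability, the first of these dimensions, can be made concrete with a worker pool whose size is chosen to match the load. The sketch below uses threads as stand-ins for machines; in a real distributed system, expanding the pool would mean adding computers rather than threads.

    from concurrent.futures import ThreadPoolExecutor
    from typing import Iterable

    def handle_request(i: int) -> int:
        return i * i  # stand-in for real work

    def serve(requests: Iterable[int], workers: int) -> list[int]:
        # "Expanding the resource pool" here just means choosing a
        # larger worker count for a heavier load.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(handle_request, requests))

    light = serve(range(10), workers=2)        # contracted pool, light load
    heavy = serve(range(10_000), workers=32)   # expanded pool, heavy load
    print(len(light), len(heavy))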

Some loss of performance may occur in a system that allows itself to scale in one or more of these dimensions.

Drawbacks and disadvantages

If not planned properly, a distributed system can decrease the overall reliability of computations if the unavailability of a node can cause disruption of the other nodes. Leslie Lamport describes this type of distributed-system fragility as follows: "You know you have one when the crash of a computer you've never heard of stops you from getting any work done."[citation needed]

Troubleshooting and diagnosing problems in a distributed system can also become more difficult, because the analysis may now require connecting to remote nodes or inspecting communications being sent between nodes.

Many types of computation are not well suited to distributed environments, typically owing to the amount of network communication or synchronization that would be required between nodes. If bandwidth, latency, or communication requirements are too significant, the benefits of distributed computing may be negated and performance may be worse than in a non-distributed environment.

Architecture

Various hardware and software architectures are used for distributed computing. At a lower level, it is necessary to interconnect multiple CPUs with some sort of network, regardless of whether that network is printed onto a circuit board or made up of loosely coupled devices and cables. At a higher level, it is necessary to interconnect processes running on those CPUs with some sort of communication system.

Distributed programming typically falls into one of several basic architectures or categories: client-server, 3-tier architecture, N-tier architecture, distributed objects, loose coupling, or tight coupling.

  • Client-server — Smart client code contacts the server for data, then formats and displays it to the user. Input at the client is committed back to the server when it represents a permanent change (a minimal sketch of this architecture follows the list).
  • 3-tier architecture — Three-tier systems move the client intelligence to a middle tier so that stateless clients can be used. This simplifies application deployment. Most web applications are 3-tier.
  • N-tier architecture — N-Tier refers typically to web applications which further forward their requests to other enterprise services. This type of application is the one most responsible for the success of application servers.
  • Tightly coupled (clustered) — refers typically to a set of highly integrated machines that run the same process in parallel, dividing the task into parts that are worked on individually by each machine and then recombined to produce the final result.
  • Peer-to-peer — an architecture where there is no special machine or machines that provide a service or manage the network resources. Instead all responsibilities are uniformly divided among all machines, known as peers.
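
For the client-server case, here is the minimal sketch promised in the list above, using Python's standard socket modules: the server holds the authoritative data, and the client fetches it and formats it for display. The port number and the one-line request protocol are illustrative assumptions.

    # server.py -- holds the data and answers one-line key lookups
    import socketserver

    DATA = {"greeting": "hello from the server"}

    class Handler(socketserver.StreamRequestHandler):
        def handle(self):
            key = self.rfile.readline().strip().decode()
            self.wfile.write(DATA.get(key, "unknown key").encode() + b"\n")

    with socketserver.TCPServer(("localhost", 9000), Handler) as srv:
        srv.serve_forever()

    # client.py -- run separately; the "smart client" formats the reply
    import socket

    with socket.create_connection(("localhost", 9000)) as conn:
        conn.sendall(b"greeting\n")
        reply = conn.makefile().readline().strip()
    print(f"server says: {reply!r}")  # display logic lives on the client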

Concurrency

Distributed computing implements a kind of concurrency. It is so closely related to concurrent programming that the two are sometimes not treated as distinct subjects [1].

Multiprocessor systems

A multiprocessor system is simply a computer that has more than one CPU on its motherboard. If the operating system is built to take advantage of this, it can run different processes (or different threads belonging to the same process) on different CPUs.
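
In Python, for example, an application can ask the operating system for additional processes and let the scheduler place them on different CPUs. This is only a sketch of the idea; any speedup observed depends on the machine.

    import multiprocessing as mp
    import os

    def burn(n: int) -> int:
        # CPU-bound work; the OS may schedule each process on its own CPU.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        print(f"this machine reports {os.cpu_count()} CPUs")
        with mp.Pool(processes=os.cpu_count()) as pool:
            results = pool.map(burn, [5_000_000] * 4)
        print(len(results), "tasks finished")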

Multicore systems

Intel CPUs from the late Pentium 4 era (Northwood and Prescott cores) employed a technology called Hyper-Threading that allowed more than one thread (usually two) to run on the same CPU. The more recent Sun UltraSPARC T1, AMD Athlon 64 X2, AMD Athlon FX, AMD Opteron, Intel Pentium D, Intel Core, Intel Core 2, and Intel Xeon processors feature multiple processor cores, which likewise increase the number of concurrent threads they can run.

Multicomputer systems

A multicomputer system is a system made up of several independent computers interconnected by a telecommunications network.

Multicomputer systems can be homogeneous or heterogeneous. A homogeneous distributed system is one where all CPUs are similar and are connected by a single type of network. Such systems are often used for parallel computing.

A heterogeneous distributed system is made up of different kinds of computers, possibly with vastly differing memory sizes, processing power, and even basic underlying architecture. Such systems are in widespread use today, with many companies adopting this architecture owing to the speed with which hardware becomes obsolete and the cost of upgrading a whole system simultaneously.

Computing taxonomies

The types of distributed computers are based on Flynn's taxonomy of systems: single instruction, single data (SISD); single instruction, multiple data (SIMD); multiple instruction, single data (MISD); and multiple instruction, multiple data (MIMD). Other taxonomies and architectures are available at Computer architecture and in Category:Computer architecture.

Computer clusters

A cluster consists of multiple stand-alone machines acting in parallel across a local high-speed network. Distributed computing differs from cluster computing in that computers in a distributed computing environment are typically not exclusively running "group" tasks, whereas clustered computers are usually much more tightly coupled. Computers in a distributed computing environment are also often widely separated geographically.

Grid computing

A grid uses the resources of many separate computers connected by a network (usually the Internet) to solve large-scale computation problems. Most grids use the idle time of many thousands of computers throughout the world. Such arrangements permit handling of data that would otherwise require the power of expensive supercomputers or would be impossible to analyze.

Languages

Nearly any programming language that has access to the full hardware of the system could handle distributed programming given enough time and code. Remote procedure calls distribute operating system commands over a network connection. Systems like CORBA, Microsoft DCOM, Java RMI, and others try to map object-oriented design to the network. Loosely coupled systems communicate through intermediate documents that are typically human-readable (e.g. XML, HTML, SGML, X.500, and EDI).
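
The loosely coupled style can be seen in miniature below: one program emits an XML document and another reads it, and the two need agree only on the document format, not on shared code or object definitions. The element names are illustrative assumptions.

    import xml.etree.ElementTree as ET

    # Producer: emit a human-readable intermediate document.
    order = ET.Element("order", id="42")
    ET.SubElement(order, "item").text = "widget"
    ET.SubElement(order, "quantity").text = "3"
    document = ET.tostring(order, encoding="unicode")
    print(document)

    # Consumer: possibly a different system in a different language;
    # it needs only the agreed document format, not the producer's code.
    parsed = ET.fromstring(document)
    print(parsed.get("id"), parsed.findtext("item"), parsed.findtext("quantity"))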

Languages specifically tailored for distributed programming include Ada, whose standard defines a Distributed Systems Annex,[2] and Erlang.

Examples

Projects

A variety of distributed computing projects have grown up in recent years. Many are run on a volunteer basis, and involve users donating their unused computational power to work on interesting computational problems. Examples of such projects include the Stanford University Chemistry Department Folding@home project, which is focused on simulations of protein folding to find disease cures; World Community Grid, an effort to create the world's largest public computing grid to tackle scientific research projects that benefit humanity, run and funded by IBM; SETI@home, which is focused on analyzing radio-telescope data to find evidence of intelligent signals from space, hosted by the Space Sciences Laboratory at the University of California, Berkeley; and distributed.net, which is focused on breaking various cryptographic ciphers.[3]

Distributed computing projects also often involve competition with other distributed systems. This competition may be for prestige, or it may be a matter of enticing users to donate processing power to a specific project. For example, stat races are a measure of the work a distributed computing project has been able to compute over the past day or week. This has been found to be so important in practice that virtually all distributed computing projects offer online statistical analyses of their performance, updated at least daily if not in real time.


References

  1. ^ CS236370 Concurrent and Distributed Programming 2002
  2. ^ Ada Reference Manual, ISO/IEC 8652:2005(E) Ed. 3, Annex E Distributed Systems
  3. ^ David P. Anderson (2005-05-23). "A Million Years of Computing" (PDF). Retrieved 2006-08-11.

Further reading

  • Attiya, Hagit; Welch, Jennifer (2004). Distributed Computing: Fundamentals, Simulations, and Advanced Topics. Wiley-Interscience. ISBN 0471453242.
  • Lynch, Nancy A. (1997). Distributed Computing. Morgan Kaufmann. ISBN 1558603484.
  • Tel, Gerard (1994). Introduction to Distributed Algorithms. Cambridge University Press.
  • Davies, Antony (2004). "Computational Intermediation and the Evolution of Computation as a Commodity" (PDF). Applied Economics.
  • Kornfeld, William (1981). "The Scientific Community Metaphor". MIT AI Memo 641.
  • Hewitt, Carl (1983). "Analyzing the Roles of Descriptions and Actions in Open Systems". Proceedings of the National Conference on Artificial Intelligence.
  • Hewitt, Carl (1985). "The Challenge of Open Systems". Byte Magazine.
  • Hewitt, Carl (1999). "Towards Open Information Systems Semantics". Proceedings of the 10th International Workshop on Distributed Artificial Intelligence. Bandera, Texas.
  • Hewitt, Carl (1991). "Open Information Systems Semantics". Journal of Artificial Intelligence.
  • Distributed Systems and Recent Innovations: Challenges and Benefits
