Scale in Distributed Systems (Clifford)
Abstract
In recent years, scale has become a factor of increasing importance in the design of distributed systems. The scale of a system has three dimensions: numerical, geographical, and administrative. The numerical dimension consists of the number of users of the system, and the number of objects and services encompassed. The geographical dimension consists of the distance over which the system is scattered. The administrative dimension consists of the number of organizations that exert control over pieces of the system.

The three dimensions of scale affect distributed systems in many ways. Among the affected components are naming, authentication, authorization, accounting, communication, the use of remote resources, and the mechanisms by which users view the system. Scale affects reliability: as a system scales numerically, the likelihood that some host will be down increases; as it scales geographically, the likelihood that all hosts can communicate will decrease. Scale also affects performance: its numerical component affects the load on the servers and the amount of communication; its geographic component affects communication latency. Administrative complexity is also affected by scale: administration becomes more difficult as changes become more frequent and as they require the interaction of different administrative entities, possibly with conflicting policies. Finally, scale affects heterogeneity: as the size of a system grows it becomes less likely that all pieces will be identical.

This paper looks at scale and how it affects distributed systems. Approaches taken by existing systems are examined and their common aspects highlighted. The limits of scalability in these systems are discussed. A set of principles for scalable systems is presented along with a list of questions to be asked when considering how far a system scales.
this can only be done with a very limited number of objects and does not scale.

UIDs may be thought of as addresses. They usually contain information identifying the server that maintains the object, and an identifier to be interpreted by the server. The information identifying the server might be an address or it might be a unique identifier to be included in requests broadcast by the client. A client needing to access an object or service is expected to already possess its unique identifier.

A problem with UID-based naming is that objects move, but the UIDs often identify the server on which an object resides. Since the UIDs are scattered about without any way to find them all, they might continue to exist with incorrect addresses for the objects they reference. A technique often used to solve this problem is forwarding pointers [8]. With forwarding pointers, a user attempting to use an old address to access an object is given a new UID containing the new address. A drawback to forwarding pointers is that the chain of links to be followed can become lengthy. This reduces performance, and if one of the nodes in the chain is down, it prevents access to the object. This drawback is solved in Emerald by requiring that each object have a home site and that the forwarding pointer at that site be kept up to date. Another solution is for the client to update the forwarding pointers it has traversed if subsequent forwarding pointers are encountered.

Prospero [20] supports UIDs with expiration dates. Its directory service guarantees that the UIDs it maintains are kept up-to-date. By using expiration dates, it becomes possible to get rid of forwarding pointers once all possible UIDs with the old address have expired.

4.4 Directory Services

Even in UID-based systems, it is often desirable to translate from symbolic names that humans use into the UIDs for the named objects. Directory services do this. Given a UID for a directory it is possible to read the contents of that directory, to map from a symbolic name in the directory to another UID, and to add a symbolic name/UID pair to the directory. A directory can contain UIDs for files, other directories, or in fact, any object for which a UID exists.

The load on directory servers is easily distributed. There is no requirement that a subdirectory be on the same server as its parent. Different parts of a name space can reside on different machines. Replication can be supported by associating multiple UIDs with the same symbolic name, or through the use of UIDs that identify multiple replicas of the same object or directory.

The primary difference between a name server and a directory server is that the directory server usually possesses little information about the full name of an object. A directory server can support pieces of independent name spaces, and it is possible for those name spaces to overlap, or even to contain cycles. Both Prospero and Amoeba use directory servers to translate names to UIDs.

4.5 Growth and Reorganization

For a system to be scalable, it must be able to grow gracefully. If two organizations with separate global name spaces merge, reorganize, or otherwise combine their name spaces, a problem arises if the name spaces are not disjoint. The problem arises because one or both name spaces suddenly change. The new names correspond to the old names, but with a new prefix corresponding to the point in the new name space at which the original name space was attached. This causes problems for any names which were hardcoded in programs or otherwise specified before the change.

DEC's Global Name Service [14] addresses this problem by associating a unique number with the root of every independent name space. When a file name is stored, the number for the root of the name space can be stored along with the name. When name spaces are merged, an entry is made in the new root pairing the unique ID of each previous root with the prefix required to find it. When a name with an associated root ID is resolved, the ID is checked, and if it doesn't match that for the current root, the corresponding prefix is prepended, allowing the hardcoded name to work.
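The root-ID scheme just described can be made concrete in a few lines. This is a minimal sketch under assumed data structures (CURRENT_ROOT_ID, MERGED_ROOTS, and resolve are invented names), not the Global Name Service's actual interface; only the prefix-prepending rule comes from the description above.

```python
# Minimal sketch of GNS-style name-space merging (assumed data layout).
CURRENT_ROOT_ID = 42                     # unique number of the current root

# Filled in when name spaces merge: old root ID -> prefix of the point
# at which that name space was attached in the new tree.
MERGED_ROOTS = {17: "/dec.com"}

def resolve(name: str, root_id: int) -> str:
    """Return the name to look up in the current name space."""
    if root_id == CURRENT_ROOT_ID:
        return name                      # already relative to this root
    prefix = MERGED_ROOTS[root_id]       # raises KeyError for unknown roots
    return prefix + name                 # prepend prefix so old names still work

# A name hardcoded before the merge, stored with its old root ID:
print(resolve("/src/editor.c", 17))      # -> /dec.com/src/editor.c
```

Nothing here captures GNS's replication or directory structure; it only shows why storing the root ID alongside a name is enough to keep hardcoded names working after a merge.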
5 The Security Subsystem

As the size of a system grows, security becomes increasingly important and increasingly difficult to implement. The bigger the system, the more vulnerable it is to attack: there are more points from which an intruder can enter the network; the system has a greater number of legitimate users; and it is more likely that the users will have conflicting goals. This is particularly troublesome when a distributed system spans administrative boundaries. The security mechanisms employed in different parts of a system will have different strengths. It is important that the effects of a security breach can be contained to the part of the system that was broken.

Security has three aspects: authentication, how the system verifies a user's identity; authorization, how it decides whether a user is allowed to perform the requested operation; and accounting, how it records what the user has done, and how it makes sure that a user does not use excessive resources. Accounting can include mechanisms to bill the user for the resources used. Many systems implement a distributed mechanism for authentication, but leave authorization to the individual server. Few systems provide for accounting in a distributed manner.

5.1 Authentication

Several techniques are used to authenticate users in distributed systems. The simplest, the use of passwords on each host, requires maintenance of a password database on multiple nodes. To make it easier to administer, Grapevine [3] supported a central service to verify passwords. Password-based authentication can be cumbersome if the user is required to present a password each time a new service is requested. Unfortunately, letting the workstation remember the user's password is risky. Password-based authentication is also vulnerable to the theft of passwords by attackers that can eavesdrop on the network.

Host-based authentication, as used for rlogin and rsh in Berkeley Unix, has problems too. In host-based authentication, the client is authenticated by the local host. Remote servers trust the host to properly identify the client. As one loses control of the nodes in a system, one is less willing to trust the claims made by other systems about the identity of their users.

Encryption-based authentication does not suffer from these problems. Passwords are never sent across the network. Instead, each user is assigned an encryption key, and that key is used to prove the user's identity. Encryption-based authentication is not without its own problems. Principals (users and servers) must maintain a key for use with every other principal with which they might possibly communicate. This is impractical in large systems. Altogether, n × m keys are required, where n is the number of users and m the number of servers.

In [17] Needham and Schroeder show how the number of keys to be maintained can be reduced through the use of an authentication server (AS). An AS securely generates keys as they are needed and distributes them to the parties wishing to communicate. Each party shares a key (or key pair) with the AS.
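The reduction the authentication server buys can be stated directly; this is just the arithmetic implied by the paragraphs above, not a result taken from [17]:

\[
\underbrace{n \times m}_{\text{pairwise keys}} \quad\text{vs.}\quad \underbrace{n + m}_{\text{one key per principal, shared with the AS}}
\]

For example, 1,000 users and 100 servers would require 100,000 pairwise keys, but only 1,100 keys when every principal shares a single key with the AS.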
[Figure 1: Kerberos Authentication Protocol — the client C sends message (1) to the authentication server K and receives (2); it then sends (3) to the server S and receives (4)]

Authentication in Kerberos [29] is based on a modified version of the Needham and Schroeder protocol (Figure 1). When a client wishes to communicate with a server it contacts the AS, sending its own name and the name of the server to be contacted (1). The AS randomly generates a session key and returns it to the client encrypted in the key that the client registered with the AS (2). Accompanying the encrypted session key is a ticket that contains the name of the client and the session key, all encrypted in the key that the server registered with the AS.

In Kerberos the session key and ticket received from the AS are valid for a limited time and are cached by the client, reducing the number of requests to the AS. Additionally, the user's secret key is only needed when initially logging in. Subsequent requests during the same login session use a session key returned by the AS in response to the initial request.

To prove its identity to the server, the client forwards the ticket together with a timestamp encrypted in the session key from the ticket (3). The server decrypts the ticket and uses the session key contained therein to decrypt the timestamp. If recent, the server knows that the message was sent by a principal who knew the session key, and that the session key was only issued to the principal named in the ticket. This authenticates the client. If the client requires authentication from the server, the server adds one to the timestamp, re-encrypts it using the session key and returns it to the client (4). (A sketch of this exchange appears at the end of this subsection.)

As a system scales, it becomes less practical for an authentication server to share keys with every client and server. Additionally, it becomes less likely that everyone will trust a single entity. Kerberos allows the registration of principals to be distributed across multiple realms. The distribution mechanism is described in Section 8.

The Kerberos authentication protocol is based on conventional cryptography, but authentication can also be accomplished using public-key cryptography. In public-key cryptography, separate keys are used for encryption and decryption, and the key distribution step of authentication can be accomplished by publishing each principal's public key. When issues such as revocation are considered, authentication protocols based on public-key cryptography make different tradeoffs, but provide little reduction in complexity. Authentication based on public-key cryptography does, however, make a significant difference when authenticating a single message to multiple recipients.
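The four messages in Figure 1 can be traced in code. The sketch below is a toy model, not the Kerberos wire protocol: seal and open_sealed stand in for real symmetric encryption, and the names, keys, and freshness window are simplified assumptions.

```python
import time

def seal(key, payload):          # stand-in for symmetric encryption
    return {"key": key, "payload": payload}

def open_sealed(key, box):       # stand-in for decryption; fails on wrong key
    assert box["key"] == key, "decryption failed"
    return box["payload"]

AS_KEYS = {"client": "Kc", "server": "Ks"}   # keys registered with the AS

def as_exchange(client, server):             # messages (1) and (2)
    session_key = "Kcs"                      # "randomly generated" session key
    ticket = seal(AS_KEYS[server], {"client": client, "session_key": session_key})
    return seal(AS_KEYS[client], session_key), ticket

def client_to_server(session_key, ticket):   # message (3)
    return ticket, seal(session_key, {"timestamp": time.time()})

def server_check(ticket, authenticator):     # server side of (3), reply (4)
    t = open_sealed(AS_KEYS["server"], ticket)
    auth = open_sealed(t["session_key"], authenticator)
    assert abs(time.time() - auth["timestamp"]) < 300   # "recent"
    return seal(t["session_key"], auth["timestamp"] + 1)

sealed_key, ticket = as_exchange("client", "server")
session_key = open_sealed(AS_KEYS["client"], sealed_key)
reply = server_check(*client_to_server(session_key, ticket))  # message (4)
print(open_sealed(session_key, reply))       # timestamp + 1: server authenticated
```

Note that the client never learns the server's key and the server never learns the client's key; each proves possession of the session key that only the AS could have issued.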
5.2 Authorization

There are a number of ways distributed systems approach authorization. In one, a request is sent to an authorization service whenever a server needs to make an access control decision. The authorization service makes the decision and sends its answer back to the server. This approach allows the access control decision to take into account factors such as recent use of other servers, global quotas, etc. The disadvantage is that it can be cumbersome and the access control service becomes a bottleneck.

In a second approach the client is first authenticated, then the server makes its own decision about whether the client is authorized to perform an operation. The server knows the most about the request and is in the best position to decide whether it should be allowed. For example, in the Andrew file system [12] each directory has an associated list, known as an access control list (ACL), identifying the users authorized to access the files within the directory. When access to a file is requested, the client's name is compared with those in the ACL.

ACL entries in Andrew can contain the names of groups. The use of groups allows rights to be granted to named collections of individuals without the need to update multiple ACLs each time membership in the group changes. Each Andrew file server maintains the list of the groups to which each user belongs and that list is consulted before checking the ACL.

The server making an authorization decision should be provided with as much information as possible. For example, if authentication required the participation of more than one AS, the names of the AS's that took part should be available. It should also be possible for the server to use external sources to obtain information such as group membership. This approach, used in Grapevine, is similar to using an authorization service. It differs in that not all requests require information from the group server, and the final decision is left to the end server.

Like Andrew, authorization in Grapevine is based on membership in ACLs. ACLs contain individuals or groups that themselves contain individuals or groups. Group membership is determined by sending to a name server a query containing the name of the individual and the name of the group. The name server recursively checks the group for membership by the individual. If necessary, recursive queries can be sent to other name servers. One of the most noticeable bottlenecks in Grapevine was the time required to check membership in large groups, especially when other name servers were involved. [27]

External information can be made available to a server without the need for it to contact another service. The client can request cryptographically sealed credentials either authorizing its access to a particular object or verifying its membership in a particular group. These credentials can be passed to the server in a manner similar to that for the capability-based approach described next. The difference from capabilities is that these credentials might only be usable by a particular user, or they might require further proof that they were really issued to the user presenting them. Version 5 of Kerberos supports such credentials. Their use is described separately in [19].

5.2.1 Capabilities

The approaches discussed so far have been based on an access control list model for authorization. A disadvantage of this model is that the client must first be authenticated, then looked up in a potentially long list; the lookup may involve the recursive expansion of multiple groups, and interaction may be required with other servers. The advantages of the access control list model are that it leaves the final decision with the server itself, and that it is straightforward to revoke access should that be required.
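The recursive expansion just mentioned amounts to a depth-first search through nested groups, as in Grapevine. The sketch below is a minimal local model; it omits the cross-server queries and caching a real name server would need, and the group table is an invented example.

```python
# Groups may contain individuals or other groups (assumed sample data).
GROUPS = {
    "staff":   {"neuman", "faculty"},
    "faculty": {"lazowska", "levy"},
}

def is_member(individual: str, name: str, seen=None) -> bool:
    """Recursively check membership, guarding against cyclic groups."""
    seen = set() if seen is None else seen
    if name == individual:
        return True
    if name not in GROUPS or name in seen:   # an individual, or already expanded
        return False
    seen.add(name)
    return any(is_member(individual, member, seen) for member in GROUPS[name])

print(is_member("levy", "staff"))      # True: staff -> faculty -> levy
print(is_member("intruder", "staff"))  # False
```

The cost of this expansion is what made membership checks on large groups a noticeable bottleneck in Grapevine, particularly when subgroups lived on other name servers.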
Amoeba [30] uses the capability model for authorization. In the capability model, the user maintains the list of the objects for which access is authorized. Each object is represented by a capability which, when presented to a server, grants the bearer access to the object. To prevent users from forging capabilities, Amoeba includes a random bit pattern. By choosing the bit pattern from a sparse enough address space, it is sufficiently difficult for a user to create its own capability. A client presents its capability when it wishes to access an object. The server then compares the bit pattern of the capability with that stored along with the object, and if they match, the access is allowed.

The advantage of the capability model is that, once contacted by the client, the server can make its access control decision without contacting other servers. Yet, the server does not need to maintain a large authorization database that would be difficult to keep up-to-date in a large system. A disadvantage is that capabilities can only be revoked en masse. Capabilities are revoked by changing the bit pattern, but this causes all outstanding capabilities for that object to be immediately invalidated. The new capability must then be reissued to all legitimate users. In a large system, this might be a significant task.

Authorization in capability-based distributed systems is still dependent on authentication and related mechanisms. Authentication is required when a user logs in to the system before the user is granted an initial capability that can be used to obtain other capabilities from a directory service. Additionally, as was the case with passwords, capabilities can be easily intercepted when they are presented to a server over the network. Thus, they can not simply be sent in the clear. Instead, they must be sent encrypted, together with sufficient information to prevent replay. This mechanism is quite similar to that used for encryption-based authentication.

5.3 Accounting

Most distributed systems handle accounting on a host-by-host basis. There is a need for a distributed, secure, and scalable accounting mechanism, especially in large systems that cross administrative boundaries. To date, few systems have even considered the problem. The difficulty lies in the inability to trust servers run by unknown individuals or organizations. The bank server [16] and accounting based on proxies [19] are among the few approaches that have been described.

In Amoeba, accounting is handled by bank servers which maintain accounts on behalf of users and servers. Users transfer money to servers, which then draw upon the balance as resources are used. Proxy-based accounting is tied much closer to authentication and authorization. The client grants the server a proxy allowing the server to transfer funds from the client's account.

Both approaches require support for multiple currencies. This is important as systems span international boundaries, or as the accounting service is called on to maintain information about different types of resources. The currencies can represent the actual funds for which clients can be billed, or they can represent limits on the use of resources such as printer pages or CPU cycles. Quotas for reusable resources (such as disk pages) can be represented as a deposit which is refunded when the resource is released.

Authorization and accounting depend on one another. In one direction, the transfer of funds requires the authorization of the owner of the account from which funds will be taken. In the other, a server might verify that the client has sufficient funds (or quota) to pay for an operation before it will be performed.
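To illustrate what a bank-server interface might look like, here is a minimal sketch loosely following the Amoeba bank server described above; the class and its methods are invented, and quotas are modeled as currencies as the text suggests. The last two transfers model a refundable deposit for a reusable resource.

```python
class BankServer:
    """Toy model of bank-server accounting (names and API are assumptions)."""
    def __init__(self):
        self.balances = {}                    # (principal, currency) -> amount

    def deposit(self, who, currency, amount):
        key = (who, currency)
        self.balances[key] = self.balances.get(key, 0) + amount

    def transfer(self, payer, payee, currency, amount):
        """Move funds; the payer's authorization is assumed already checked."""
        if self.balances.get((payer, currency), 0) < amount:
            raise ValueError("insufficient funds or quota")
        self.balances[(payer, currency)] -= amount
        self.deposit(payee, currency, amount)

bank = BankServer()
bank.deposit("client", "printer-pages", 100)  # a quota expressed as a currency
bank.transfer("client", "print-server", "printer-pages", 30)   # pay for a job
bank.deposit("client", "disk-pages", 50)
bank.transfer("client", "file-server", "disk-pages", 20)   # deposit while held
bank.transfer("file-server", "client", "disk-pages", 20)   # refunded on release
print(bank.balances)
```

The insufficient-funds check in transfer is the accounting side of the dependency noted above: the server can refuse an operation the client cannot pay for.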
5.4 On Replication, Distribution and Caching

This section has described the problems specific to scaling the security subsystems of large systems and has discussed the mechanisms used to solve them. Many of the problems that we saw with naming also arise with security. As with naming, replication, distribution, and caching are often used. When applying these techniques in the security area, a few considerations must be kept in mind.

When replicating a server that maintains secret keys, the compromise of any replica can result in the compromise of important keys. The security of the service is that of the weakest of all replicas. When distribution is used, multiple servers may be involved in a particular exchange. It is important that both principals know which servers were involved so that they can correctly decide how much trust to place in the results. Finally, the longer one allows credentials to be cached, the longer it will take to recover when a key is compromised.

As a system grows, less trust can be placed in its component pieces. For this reason, encryption-based security mechanisms are the appropriate choice for large distributed systems. Even encryption-based mechanisms rely on trust of certain pieces of a system. By making it clear which pieces need to be trusted, end services are better able to decide when a request is authentic.

6 Remote Resources

Naming and security are not the only parts of the system affected by scale. Scale affects the sharing of many kinds of resources. Among them are processors, memory, storage, programs, and physical devices. The services that provide access to these resources often inherit scalability problems from the naming and security mechanisms they use. For example, one can't access a resource without first finding it. This involves both identifying the resource that is needed and determining its location given its name. Once a resource has been found, authentication and authorization might be required for its use.

These services sometimes have scalability problems of their own, and similar techniques are employed to solve them. Problems of load and reliability are often addressed through replication, distribution, and caching. Some services further reduce load by shifting as much computation to the client as possible; however, this should only be done when all the information needed for the computation is readily accessible to the client.

The services used to access remote resources are very dependent on the underlying communications mechanisms they employ. This section will look at the scaling issues related to network communication in such services. To provide an example of the problems that arise when supporting access to remote resources, it will then look at the effect of scale on a heavily used resource, the network file system.

6.1 Communication

As a system grows geographically, the medium of communication places limits on the system's performance. These limits must be considered when deciding how best to access a remote resource. Approaches which might seem reasonable given a low-latency connection might not be reasonable across a satellite link.

Because they can greatly affect the usability of a system, the underlying communication parameters must not be completely hidden from the application.
The Dash system [1] does a good job of exposing the communication parameters in an appropriate manner. When a connection is established, it is possible for the application to require that the connection meet certain requirements. If the requirements are not met, an error is returned. When one set of required communication parameters can not be met, it might still be possible for the application to access the resource via an alternate mechanism; e.g., whole-file caching instead of remote reads and writes.

Communication typically takes one of two forms: point-to-point or broadcast. In point-to-point communication the client sends messages to the particular server that can satisfy the request. If the contacted server can not satisfy the request, it might respond with the identity of a server that can. With broadcast, the client sends the message to everyone, and only those servers that can satisfy the request respond.

The advantage of broadcast is that it is easy to find a server that can handle a request; just send the request and the correct server responds. Unfortunately, broadcast does not scale well. Preliminary processing is required by all servers whether or not they can handle a request. As the total number of requests grows, the load due to preliminary processing on each server will also grow.

The use of global broadcast also limits the scalability of computer networks. Computer networks improve their aggregate throughput by distributing network traffic across multiple subnets. Only those messages that need to pass through a subnet to reach their destination are transmitted on a particular subnet. Local communication in one part of a network is not seen by users in another. When messages are broadcast globally, they are transmitted on all subnets, consuming available bandwidth on each.

Although global broadcast should be avoided in scalable systems, broadcast need not be ruled out entirely. Amoeba [30] uses broadcast on its subnets to improve the performance of local operations. Communication beyond the local subnet uses point-to-point communication.

Multicast, a broadcast-like mechanism, can also be used. In multicast, a single message can be sent to a group of servers. This reduces the number of messages required to transmit the same message to multiple recipients. For multicast to scale, the groups to which messages are sent should be kept small (only those recipients that need to receive a message). Additionally, the network should only transmit multicast messages across those subnets necessary to reach the intended recipients.

6.2 File Systems

The file system provides an excellent example of a service affected by scale. It is heavily used, and it requires the transfer of large amounts of data.

In a global file system, distribution is the first line of defense against overloading file servers. Files are spread across many servers, and each server only processes requests for the files that it stores. Mechanisms used to find the server storing a file given the file's name are described in Section 8. In most distributed systems, files are assigned to servers based on a prefix of the file name. For example, on a system where the names of binaries start with "/bin", it is likely that such files will be assigned to a common server. Unfortunately, since binaries are more frequently referenced than files in other parts of the file system, such an assignment might not evenly distribute requests across file servers.
Requests can also be spread across file servers through the use of replication. Files are assigned to multiple servers, and clients contact a subset of the servers when making requests. The difficulty with replication lies in keeping the replicas consistent. Techniques for doing so are described in Section 7. Since binaries rarely change, manual techniques are often sufficient for keeping their replicas consistent.

Caching is extremely important in network file systems. A local cache of file blocks can be used to make network delays less noticeable. A file can be read over the network a block at a time, and access to data within that block can be made locally. Caching significantly reduces the number of requests sent to a file server, especially for applications that read a file several bytes at a time. The primary difficulty with caching lies in making sure the cached data is correct. In a file system, a problem arises if a file is modified while other systems have the file, or parts of the file, in their cache. Mechanisms to maintain the consistency of caches are described in Section 9.

An issue of importance when caching files is the size of the chunks to be cached. Most systems cache pieces of files. This is appropriate when only parts of a file are read. Coda [26] and early versions of the Andrew File System [12] support whole-file caching, in which the entire file is transferred to the client's workstation when opened. Files that are modified are copied back when closed. Files remain cached on the workstation between opens so that a subsequent open does not require the file to be fetched again. Approaches such as whole-file caching work well on networks with high latency, and this is important in a geographically large system. But whole-file caching can be expensive if an application only wants to access a small part of a large file. Another problem is that it is difficult for diskless workstations to support whole-file caching for large files. Because of the range in capabilities of the computers and communication channels that make up a distributed system, multiple file access mechanisms should be supported.

7 Replication

Replication is an important tool for building scalable distributed systems. Its use in naming, authentication, and file services reduces the load on individual servers and improves the reliability and availability of the services as a whole. The issues of importance for replication are the placement of the replicas and the mechanisms by which they are kept consistent.

7.1 Placement of Replicas

The placement of replicas in a distributed system depends on the purpose for replicating the resource. If a service is being replicated to improve the availability of the service in the face of network partitions, or if it is being replicated to reduce the network delays when the service is accessed, then the replicas should be scattered across the system. Replicas should be located so that a network partition will not make the service unavailable to a significant number of users.

If the majority of users are local, and if the service is being replicated to improve the reliability of the service, to improve its availability in the face of server failure, or to spread the load across multiple servers, then replicas may be placed near one another. The placement of replicas affects the choice of the mechanism that maintains the consistency of replicas.

7.2 Consistency

A replicated object can logically be thought of as a single object. If a change is made to the object, the change should be visible to everyone. At a particular point in time, a set of replicas is said to be consistent if the value of the object is the same for all readers.
The following are some of the approaches that have been used to maintain the consistency of replicas in distributed systems.

Some systems only support replication of read-only information. Andrew and Athena take this approach for replicating system binaries. Because such files change infrequently, and because they can't be changed by normal users, external mechanisms are used to keep the replicas consistent.

Closely related to the read-only approach is replication of immutable information. This approach is used by the Amoeba file server. Files in Amoeba are immutable, and as a result, they can be replicated at will. Changes to files are made by creating new files, then changing the directory so that the new version of the file will be found.

A less restrictive alternative is to allow updates, but to require that updates are sent to all replicas. The limitation of this approach is that updates can only take place when all of the replicas are available, thus reducing the availability of the system for write operations. This mechanism also requires an absolute ordering on updates so that inconsistencies do not result if updates are received by replicas in different orders. A final difficulty is that a client might fail during an update, resulting in its receipt by only some of the replicas.

In primary-site replication, all updates are directed to a primary replica which then forwards the updates to the others. Updates may be forwarded individually, as in Echo [11], or the whole database might be periodically downloaded by the replicas, as in Kerberos [29] and the Berkeley Internet Domain Naming system (BIND) [31], an implementation of IDNS [15]. The advantage of the primary-site approach is that the ordering of updates is determined by the order in which they are received at the primary site, and updates only require the availability of the primary site. A disadvantage of the primary-site approach is that the availability of updates still depends on a single server, though some systems select a new primary site if the existing primary goes down. An additional disadvantage applies if changes are distributed periodically: the updates are delayed until the next update cycle.

For some applications, absolute consistency is often not an overriding concern. Some delay in propagating a change is often acceptable, especially if one can tell when a response is incorrect. This observation was exploited by Grapevine, allowing it to guarantee only loose consistency. With loose consistency, it is guaranteed that replicas will eventually contain identical data. Updates are allowed even when the network is partitioned or servers are down. Updates are sent to any replica, and that replica forwards the update to the others as they become available. If conflicting updates are received by different replicas in different orders, timestamps indicate the order in which they are to be applied. The disadvantage of loose consistency is that there is no guarantee that a query returns the most recent data. With name servers, however, it is often possible to check whether the response was correct at the time it is used.
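A minimal model of the loose-consistency scheme just described: each replica accepts updates locally and forwards them lazily, and timestamps decide the order in which conflicting updates are applied. The replica class and its message plumbing are invented for illustration; Grapevine's actual propagation machinery is far more involved.

```python
class Replica:
    def __init__(self):
        self.store = {}          # key -> (timestamp, value)
        self.log = []            # updates seen, for later exchange with peers

    def update(self, key, value, ts):
        """Apply an update only if newer than what we have (last-writer-wins)."""
        current = self.store.get(key)
        if current is None or ts > current[0]:
            self.store[key] = (ts, value)
        self.log.append((key, value, ts))   # remember it for anti-entropy

    def forward_to(self, peer):
        """Push logged updates to a peer as it becomes available."""
        for key, value, ts in self.log:
            peer.update(key, value, ts)

a, b = Replica(), Replica()
a.update("neuman.uw", "host1", ts=1)     # applied at a while b is unreachable
b.update("neuman.uw", "host2", ts=2)     # conflicting update applied at b
a.forward_to(b); b.forward_to(a)         # replicas exchange logs later
print(a.store["neuman.uw"], b.store["neuman.uw"])   # both: (2, 'host2')
```

A query sent to replica a before the logs were exchanged would have returned the stale value; that window of staleness is exactly the loose-consistency tradeoff noted above.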
Maintaining a consistent view of replicated data does not require that all replicas are up-to-date. It only requires that the up-to-date information is always visible to the users of the data. In the mechanisms described so far, updates eventually make it to every replica. In quorum-consensus, or voting [9], updates may be sent to a subset of the replicas. A consistent view is maintained by requiring that all reads are directed to at least one replica that is up-to-date. This is accomplished by assigning votes to each replica, by selecting two numbers, a read-quorum and a write-quorum, such that the read-quorum plus the write-quorum exceeds the total number of votes, and by requiring that reads and writes are directed to a sufficient number of replicas to collect enough votes to satisfy the quorum. This guarantees that the set of replicas read will intersect with the set written during the most recent update. Timestamps or version numbers stored with each replica allow the client to determine which data is most recent.

8 Distribution

Distribution allows the information maintained by a distributed service to be spread across multiple servers. This is important for several reasons: there may be too much information to fit on a single server; it reduces the number of requests to be handled by each server; it allows administration of parts of a service to be assigned to different individuals; and it allows information that is used more frequently in one part of a network to be maintained nearby.

This section will describe the use of distribution in naming, authentication, and file services. Some of the issues of importance for distribution are the placement of the servers and the mechanisms by which the client finds the server with the desired information.

8.1 Placement of Servers

Distributed systems exhibit locality. Certain pieces of information are more likely to be accessed by users in one part of a network than by users in another. Information should be distributed to servers that are near the users that will most frequently access the information. For example, a user's files could be assigned to a file server on the same subnet as the workstation usually used by that user. Similarly, the names maintained by name servers can be assigned so that names for nearby objects can be obtained from local name servers. In addition to reducing network traffic, such assignments improve reliability, since it is less likely that a network partition will make a local server inaccessible. In any case, it is desirable to avoid the need to contact a name server across the country in order to find a resource in the next room.

By assigning information to servers along administrative lines, an organization can avoid dependence on others. When distributed along organizational lines, objects maintained by an organization are often said to be within a particular domain (IDNS), or a cell (Andrew). Kerberos uses the term realm to describe the unit of distribution when there exists an explicit trust relationship between the server and the principals assigned to it.

8.2 Finding the Right Server

The difficulty with distribution lies in the distribution function: the client must determine which server contains the requested information. Hierarchical name spaces make the task easier since names with common prefixes are often stored together*, but it is still necessary to identify the server maintaining that part of the name space. The methods most frequently used are mounts, broadcast, and domain-based queries.

* In this discussion, prefix means the most significant part of the name. For file names, or for names in DEC's Global Naming System, it is the prefix. For domain names it is really the suffix.

Sun's Network File System [25], Locus [32], and Plan 9 [24] use a mount table to identify the server on which a named object resides. The system maintains a table mapping name prefixes to servers. When an object is referenced, the name is looked up in the mount table, and the request is forwarded to the appropriate server. In NFS, the table can be different on different systems, meaning that the same name might refer to different objects on different systems.
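A mount table of the kind just described is essentially a longest-prefix match from names to servers. The sketch below is a generic illustration, not the actual NFS, Locus, or Plan 9 implementation; the table contents are invented.

```python
# Assumed example table: name prefix -> server responsible for that subtree.
MOUNT_TABLE = {
    "/":         "rootfs-server",
    "/bin":      "binary-server",
    "/usr/home": "home-server",
}

def server_for(path: str) -> str:
    """Pick the server whose prefix matches the longest piece of the path."""
    best = max(
        (p for p in MOUNT_TABLE
         if path == p or path.startswith(p.rstrip("/") + "/")),
        key=len,
    )
    return MOUNT_TABLE[best]

print(server_for("/bin/ls"))             # binary-server
print(server_for("/usr/home/neuman/x"))  # home-server
print(server_for("/etc/passwd"))         # rootfs-server (catch-all entry)
```

Longest-prefix matching lets more specific entries (such as /usr/home) override the catch-all root entry.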
Locus supports a uniform name space by keeping the mount table the same on all systems. In Plan 9, the table is maintained on a per-process basis.

Broadcast is used by Sprite [22] to identify the server on which a particular file can be found. The client broadcasts a request, and the server with the file replies. The reply includes the prefix for the files maintained by the server. This prefix is cached so that subsequent requests for files with the same prefix can be sent directly to that server. As discussed in Section 6.1, this approach does not scale beyond a local network. In fact, most of the systems that use this approach provide a secondary name resolution mechanism to be used when a broadcast goes unanswered.

Distribution in Grapevine, IDNS, and X.500 [5] is domain-based. Like the other techniques described, the distribution function in domain-based naming is based on a prefix of the name to be resolved. Names are divided into multiple components. One component specifies the name to be resolved by a particular name server and the others specify the server that is to resolve the name. For example, names in Grapevine consist of a registry and a name within the registry. A name of the form neuman.uw would be stored in the uw registry under the name neuman. IDNS and DEC's Global Naming System both support variable-depth names. In these systems, the point at which the name and the domain are separated can vary. In IDNS, the last components of the name specify the domain, and the first components specify the name within that domain. For example, venera.isi.edu is registered in the name server for the isi.edu domain.

To find a name server containing information for a given domain or registry, a client sends a request to the local name server. The local name server sends back an answer, or information redirecting the query to another name server. With the two-level name space supported by Grapevine, only two queries are required: one to find the server for a given registry, and one to resolve the name. The server for a given registry is found by looking up the name in the gv registry which is replicated on every Grapevine server.

The resolution of a name with a variable number of components is shown in Figure 2. The client sends a request to its local server requesting resolution of the host name a.isi.edu. That server returns the name and address of the edu server. The client repeats its request to the edu server which responds with the name and address for the isi.edu server. The process repeats, with successively longer prefixes, until a server (in this case isi.edu) returns the address for the requested host. The client caches intermediate responses mapping prefixes to servers so that subsequent requests can be handled with fewer messages.

[Figure 2: Resolving a domain-based name — the client sends "lookup a.isi.edu" to its local server (uw.edu), which answers "edu is 192.67.67.53"; then to the edu server, which answers "isi.edu is 128.9.0.32"; and finally to the isi.edu server, which answers "a.isi.edu is 128.9.0.107"]
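The walkthrough in Figure 2 is easy to express as an iterative loop with a prefix cache. The server table below is a toy stand-in for the real delegations (the addresses are taken from the figure), and the cache policy is simplified.

```python
# Toy delegation data modeled on Figure 2.
SERVERS = {
    "local":        {"edu": "192.67.67.53"},
    "192.67.67.53": {"isi.edu": "128.9.0.32"},      # the edu server
    "128.9.0.32":   {"a.isi.edu": "128.9.0.107"},   # the isi.edu server
}

CACHE = {}                       # prefix -> server address, filled as we go

def resolve(name: str) -> str:
    server = "local"
    while True:
        # Each server answers with the longest piece of the name it knows.
        known = SERVERS[server]
        match = max((p for p in known if name == p or name.endswith("." + p)),
                    key=len)
        CACHE[match] = known[match]
        if match == name:
            return known[match]          # final answer: the host's address
        server = known[match]            # follow the referral

print(resolve("a.isi.edu"))   # 128.9.0.107, after asking local, edu, isi.edu
print(CACHE)                  # intermediate referrals cached for later queries
```

As the footnote in Section 8.2 notes, the "prefix" of a domain name is really a suffix, which is why the matching here runs from the right.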
Domain-based distribution of names scales well. As the system grows and queries become more frequent, additional replicas of frequently queried registries or domains can be added. Grapevine's two-level name space, though, places limits on scalability. Since every name server must maintain the gv registry, and because the size of this registry grows linearly with the total number of name servers, the total number of name servers that can be supported is limited. Clearinghouse, a production version of Grapevine, addressed this problem by supporting a three-level name space. This allows the name service to scale to a larger number of names, but it still eventually reaches a limit due to the size of the root or second-level registries.
11 Building Scalable Systems

This section presents suggestions for building scalable systems. These suggestions are discussed in greater detail in the paper and are presented here in a form that can be used as a guide. The hints are broken into groups corresponding to the primary techniques of replication, distribution and caching.

When building systems it is important to consider factors other than scalability. An excellent collection of hints on the general design of computer systems is presented by Lampson in [13].

11.1 Replication

Replicate important resources. Replication increases availability and allows requests to be spread across multiple servers, thus reducing the load on each.

Distribute the replicas. Placing replicas in different parts of the network improves availability during network partitions. By placing at least one replica in any area with frequent requests, those requests can be directed to a local replica, reducing the load on the network and minimizing response time.

Use loose consistency. Absolute consistency doesn't scale well. By using loose consistency the cost of updates can be reduced, while changes are guaranteed to eventually make it to each replica. In systems that use loose consistency it is desirable to be able to detect out-of-date information at the time it is used.

11.2 Distribution

Distribute across multiple servers. Distributing data across multiple servers decreases the size of the database that must be maintained by each server, reducing the time needed to search the database. Distribution also spreads the load across the servers, reducing the number of requests that are handled by each.

Distribute evenly. The greatest impact on scalability will be felt if requests can be distributed to servers in proportion to their power. With an uneven distribution, one server may be idle while others are overloaded.

Exploit locality. Network traffic and latency can be reduced if data are assigned to servers close to the location from which they are most frequently used. The Internet Domain Naming System does this. Each site maintains the information for its own hosts in its own servers. Most queries to a name server are for local hosts. As a result, most queries never leave the local network.

Bypass upper levels of hierarchies. In hierarchically organized systems, just about everyone needs information from the root. If cached copies are available from subordinate servers, the upper levels can be bypassed. In some cases, it might be desirable for a server to answer queries only from its immediate subordinates, and to let the subordinates make the responses available to their subordinates.

11.3 Caching

Cache frequently accessed data. Caching decreases the load on servers and the network. Cached information can be accessed more quickly than if a new request is made.

Consider access patterns when caching. The amount of data normally referenced together, the ratio of reads to writes, the likelihood of conflicts, the number of simultaneous users, and other factors will affect the choice of caching mechanisms. For example, if files are normally read from start to finish, caching the entire file might be more efficient than caching blocks. If conflicts between readers and writers are rare, using callbacks to maintain consistency might reduce requests. The ability to detect invalid data on use allows cached data to be used until such a condition is detected.
Cache timeout. By associating a time-to-live (TTL) with cached data an upper bound can be placed on the time required for changes to be observed. This is useful when only eventual consistency is required, or as a backup to other cache consistency mechanisms. The TTL should be chosen by the server holding the authoritative copy. If a change is expected, the TTL can be decreased accordingly. (A minimal sketch of this hint appears at the end of this section.)

Cache at multiple levels. Additional levels of caching often reduce the number of requests to the next level. For example, if a name server handling requests for a local network caches information from the root name servers, it can request it once, then answer local requests for that information instead of requiring each client to request it separately. Similarly, caching on file servers allows a block to be read (and cached) by multiple clients, but only requires one disk access.

Look first locally. By looking first for nearby copies of data before contacting central servers, the load on central servers can be reduced. For example, if a name is not available from a cache in the local system, contact a name server on the local network before contacting a distant name server. Even if it is not the authority for the name to be resolved, the local name server may possess information allowing the root name server to be bypassed.

The more extensively something is shared, the less frequently it should be changed. When an extensively shared object is changed, a large number of cached copies become invalid, and each must be refreshed. A system should be organized so that extensively shared data is relatively stable. A hierarchical name space exhibits this property. Most changes occur at the leaves of the hierarchy. Upper levels rarely change.

11.4 General

Shed load, but not too much. When computation can be done as easily by the client as the server, it is often best to leave it to the client. However, if allowing the client to perform the computation requires the return of a significantly greater amount of information (as might be the case for a database query), it is more appropriate for the server to do the computation. Additionally, if the result can be cached by the server, and later provided to others, it is appropriate to do the computation on the server, especially if the computation requires contacting additional servers.

Avoid global broadcast. Broadcast does not scale well. It requires all systems to process a message whether or not they need to. Multicast is acceptable, but groups should include only those servers that need to receive the message.

Support multiple access mechanisms. Applications place varying requirements on access mechanisms. What is best for one application might not be so for another. Changing communication parameters can also affect the choice of mechanism. Multiple mechanisms should be supported when accessing objects and resources. The client should choose the method based on the prevailing conditions.

Keep the user in mind. Many mechanisms are used to help the system deal with scale. The mechanisms that are used should not make the system more difficult to understand. Even with a familiar system model, the number of available objects and resources can overwhelm the user. Large systems require mechanisms that reduce the amount of information to be processed and remembered by the user. These mechanisms should not hide information that might be of interest.
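As a concrete reading of the cache-timeout hint above, here is a minimal TTL cache in which the expiry is dictated by the authoritative server's response; the fetch function, names, and TTL values are illustrative assumptions.

```python
import time

CACHE = {}   # name -> (value, expires_at)

def authoritative_fetch(name):
    """Stand-in for a query to the authoritative server, which picks the TTL."""
    ttl = 30 if name.endswith(".volatile") else 3600  # shorter if change expected
    return f"address-of-{name}", ttl

def lookup(name):
    entry = CACHE.get(name)
    if entry and entry[1] > time.time():
        return entry[0]                   # still fresh: answer from the cache
    value, ttl = authoritative_fetch(name)
    CACHE[name] = (value, time.time() + ttl)   # server-chosen upper bound
    return value

print(lookup("a.isi.edu"))   # first call goes to the server
print(lookup("a.isi.edu"))   # second call is served from the cache
```

The TTL bounds how long a stale answer can circulate, which is why it also works as a backup to stronger consistency mechanisms such as callbacks.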
11.5 Evaluating Scalable Systems

There are many questions to be asked when evaluating the scalability of a distributed system. This subsection lists some of the questions that are important. It does not provide a formula that yields a number. In fact, different systems scale in different ways. One system may scale better administratively, while another scales better numerically. There are so many unknowns that affect scaling that experience is often the only true test of a system's ability to scale.

The first set of questions concerns the use of the system. How will the frequency of queries grow as the system grows? What percentage of those queries must be handled by central servers? How many replicas of the central servers are there, is this enough, can more be added, what problems are introduced by doing so, and are there any bottlenecks?

The next set of questions concerns the data that must be maintained. How does the size of the databases handled by the individual servers grow? How does this affect query time? How often will information change? What update mechanism is used, and how does it scale? How will an update affect the frequency of queries? Will caches be invalidated, and will this result in a sudden increase in requests as caches are refreshed?

The final question concerns the administrative component of scale. Many systems require a single authority that makes final decisions concerning the system. Is this required, and is it practical in the environment for which the system will be used?

Asking these questions will point out some of the problem areas in a system. This is not a complete list. It is entirely possible that important factors not addressed will cause a system to stop scaling even earlier.

12 Conclusions

This paper examined the problems that arise as systems scale. It has used examples from many systems to demonstrate the problems and their solutions. The systems mentioned are not the only systems for which scale was a factor in their design; they simply provided the most readily available examples for the mechanisms that were discussed. The discussion has necessarily taken a narrow view of the systems that were discussed, examining individual subsystems instead of the systems as a whole. The effects of scale, however, are felt throughout the system.

This paper has shown how scale affects large systems. Scale can be broken into its numerical, geographical, and administrative components. Each component introduces its own problems, and the solutions employed by a number of systems were discussed. The three techniques used repeatedly to handle scale are replication, distribution, and caching.

A collection of suggestions for designing scalable systems was presented in Section 11. These suggestions expand upon the three primary techniques and suggest additional ways in which they can be applied. It is hoped that these hints will help system designers address scale in the design of future distributed systems.

Acknowledgments

I would like to thank Brian Bershad, Robert Cooper, Barbara Gordon, Bruce Gordon, Terry Gray, Andrew Herbert, Richard Ladner, Ed Lazowska, Hank Levy, Mary Ann G. Neuman, David Notkin, John Zahorjan, and the anonymous referees who commented on earlier drafts of this paper.
Appendix: Systems Designed with Scale in Mind

Scalability is included among the design criteria of a number of recent systems. The degree to which these systems scale ranges from a collection of computers on a local area network, to computers distributed across the entire Internet. This appendix describes some of these systems, states the degree to which each system is intended to scale, and lists some of the ways in which the system addresses the problems of scale. Table 1 summarizes this information in tabular form.

Amoeba, developed at Vrije Universiteit and CWI in Amsterdam, is a capability-based distributed operating system which has been used across long-haul networks spanning multiple organizations. Objects are referenced by capabilities which include identifiers for the server and object, and access rights for the object. The capabilities provide both a distributed naming and authorization mechanism. [16, 30]

The Andrew system, developed at Carnegie-Mellon University, runs on thousands of computers distributed across the university campus. Its most notable component is the Andrew File System which now ties together file systems at sites distributed across the United States. Coda is a follow-on to Andrew, improving availability, especially in the face of network partitions. [12, 26]

MIT's Project Athena is a system built from thousands of computers distributed across campus. Distributed services provide authentication, naming, filing, printing, mail and administrative functions. Kerberos was developed as part of Project Athena. [6]

Dash, under development at Berkeley, is a distributed operating system designed for use across large networks exhibiting a range of transmission characteristics. Dash is notable for exposing these characteristics by allowing the application to require that the connection meet certain requirements and returning an error if those requirements cannot be met. [1]

DEC's Global Naming System, developed at DEC's Systems Research Center, was designed to support naming in large networks spanning multiple organizations. It is notable for the attention paid to reorganization of the name space as independent name spaces are merged, or as the external relationships between organizations change (e.g. mergers or acquisitions). Echo is a distributed file system supporting consistent replication of local partitions, but with partitions tied together using the loosely consistent Global Naming System. DEC's Global Authentication System is notable for the fact that a principal's name is not absolute, but is instead determined by the sequence of authentication servers used to authenticate the principal. [2, 11, 14]

Grapevine was one of the earliest distributed systems designed to scale to a large network. It was developed at Xerox PARC to support electronic mail, to provide a name service for the location of network services, and to support simple password-based authentication on a world-wide network connecting Xerox sites. [3, 27]

The Heterogeneous Computer Systems Project at the University of Washington demonstrated that a single interface could be used to communicate with systems using different underlying protocols and data representations. This is important for large systems when it is not practical to dictate the choice of hardware and software across multiple sites, or when the underlying mechanisms have different strengths and weaknesses. [21]
The Internet Domain Naming System the object(s) matching that information. [23]
(IDNS) is a distributed name service, run-
ning on the Internet, supporting the transla- Prospero, developed at the University of
tion of host names to Internet addresses and Washington, runs on systems distributed
mail forwarders. Each organization maintains across the Internet. It supports an object-
replicated servers supporting the translation centered view of the entire system, allowing
of names for its own part of the name space. users to dene their own virtual system by
[15, 31] specifying the pieces of the global system that
are of interest. Prospero's support for closure
Kerberos is an encryption-based network resolves the problems caused by the use of mul-
authentication system, developed by MIT's tiple name spaces. [20]
Project Athena, which supports authentication
of users both locally, and across organizational QuickSilver, developed at IBM's Almaden
boundaries. [29] Research Center, is notable for its proposed
use of a user-centered3 name space. In a sys-
Locus, developed at the UCLA, was designed tem spanning a large, multi-national corpora-
to run on systems distributed across a local- tion, such a name space allows users to see only
area network. Locus is notable as one of the those parts of the system that concern them.
earliest distributed systems to support a uni- [4]
form view of the le system across all nodes in
the system. [32] Sprite, a network operating system developed
at Berkeley, was designed for use across a local
Sun's Network File System supports transparent access to files stored on remote hosts. Files are named independently on each host. Before a remote file can be accessed, the remote file system containing the file must be mounted on the local system, establishing a mapping of part of the local file name space to files on the remote system. The NFS server maintains very little information (state) about the clients that use it. [25]
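That statelessness matters for scale and for recovery: each request carries everything the server needs, so nothing about a client must survive between calls. A minimal sketch of the idea (the file-handle format and contents here are hypothetical, not the NFS protocol):

    # A read request names the file handle, offset, and count explicitly,
    # so the server can satisfy it without remembering open files or
    # client sessions; the same request can be re-sent unchanged after a
    # server crash and restart.
    FILES = {"fh:42": b"hello, world\n"}    # file handle -> contents

    def nfs_read(file_handle: str, offset: int, count: int) -> bytes:
        return FILES[file_handle][offset:offset + count]

    assert nfs_read("fh:42", 0, 5) == b"hello"
    assert nfs_read("fh:42", 7, 5) == b"world"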
Plan 9 from Bell Labs, intended for use by a large corporation, supports a process-centered³ name space, allowing users to incorporate into their name space those parts of the global system that are useful. [24]
Profile, developed at the University of Arizona, is an attribute-based name service that maps possibly incomplete information about coarse-grained objects on a large network to the object(s) matching that information. [23]

Prospero, developed at the University of Washington, runs on systems distributed across the Internet. It supports an object-centered view of the entire system, allowing users to define their own virtual system by specifying the pieces of the global system that are of interest. Prospero's support for closure resolves the problems caused by the use of multiple name spaces. [20]

QuickSilver, developed at IBM's Almaden Research Center, is notable for its proposed use of a user-centered³ name space. In a system spanning a large, multi-national corporation, such a name space allows users to see only those parts of the system that concern them. [4]

Sprite, a network operating system developed at Berkeley, was designed for use across a local area network. Its file system is notable for its use of caching on both the client and the server to improve performance, and for its use of prefix tables to distribute requests to the correct file server (sketched below). [22]

The Tilde naming system, developed at Purdue, supports process-centered³ naming. This allows one to specify, on a per-process basis, how names will map to pieces of the global system. This ability provides applications with the advantages of a global name space for those file names that should be resolved globally, while allowing parts of the name space to be specified locally for file names which would be better resolved to local files. [7]

X.500 is an ISO standard describing a distributed directory service that is designed to store information about users, organizations, resources, and similar entities worldwide. Scalability is addressed in largely the same manner as in the Internet Domain Name Service. [5]

³ Perhaps better described as process- or user-customizable.
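The prefix tables mentioned above for Sprite (and, in per-process form, the mappings of Tilde and Plan 9) amount to longest-prefix matching on path names. A small sketch with a hypothetical table and server names:

    # The longest table entry that is a prefix of the path chooses the
    # file server responsible for that part of the name space.
    PREFIX_TABLE = {
        "/":             "serverA",
        "/users":        "serverB",
        "/users/neuman": "serverC",
    }

    def server_for(path: str) -> str:
        matches = (p for p in PREFIX_TABLE
                   if path == p or path.startswith(p.rstrip("/") + "/"))
        return PREFIX_TABLE[max(matches, key=len)]

    assert server_for("/etc/passwd") == "serverA"
    assert server_for("/users/neuman/thesis.tex") == "serverC"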
                              --------- Intended environment ---------  ------------ Methods used ------------
System        Service         # nodes  Geographic  Administrative          Replication    Distribution  Caching
----------------------------------------------------------------------------------------------------------------
Amoeba        general         1        wide-area   multiple organizations  immutable      capabilities  yes
Andrew        file system     10,000   wide-area   multiple organizations  read-only      cell/volume   blocks
Athena        general         10,000   campus      university              service        clusters      yes
Coda          file system     10,000   global      multiple organizations  optimistic     volume        whole file
Dash          general         1        wide-area   multiple organizations  yes            yes           yes
DEC's Global  naming          1        global      multiple organizations  loose          directories   time-to-live
DEC's Global  authentication  1        global      multiple organizations  loose          directories   -
Echo          file system     1        wide-area   multiple organizations  loose/primary  volume        yes
Grapevine     general         2,000    company     multiple departments    loose          registry      yes
HCS           general         -        wide-area   multiple organizations  -              yes           -
IDNS          naming          1        global      multiple organizations  primary        domain        yes
Kerberos      authentication  1        global      multiple organizations  primary        realm         tickets
Locus         general         100      local       department              primary        mount         yes
NFS           file system     -        local       single organization     no             mount         blocks
Plan 9        general         10,000   company     multiple departments    no             mount         no
Profile       naming          1        wide-area   multiple organizations  information    principal     client-managed
Prospero      naming          1        global      multiple organizations  yes            uid           yes
QuickSilver   file system     10,000   company     multiple departments    no             prefix        immutable
Sprite        file system     100      local       department              read-only      prefix        client & server
Tilde         naming          100      local       single organization     no             trees         yes
X.500         naming          1        global      multiple organizations  yes            yes           yes

Table 1: Important distributed systems and the methods they use to handle scale
References

[1] David P. Anderson and Domenico Ferrari. The Dash project: An overview. Technical Report 88/405, Computer Science Division, Department of Electrical Engineering and Computer Science, University of California at Berkeley, August 1988.

[2] Andrew D. Birrell, Butler W. Lampson, Roger M. Needham, and Michael D. Schroeder. A global authentication service without global trust. In Proceedings of the IEEE Symposium on Security and Privacy, pages 223–230, April 1986.

[3] Andrew D. Birrell, Roy Levin, Roger M. Needham, and Michael D. Schroeder. Grapevine: An exercise in distributed computing. Communications of the ACM, 25(4):260–274, April 1982.

[4] Luis-Felipe Cabrera and Jim Wyllie. QuickSilver distributed file services: An architecture for horizontal growth. In Proceedings of the 2nd IEEE Conference on Computer Workstations, pages 23–27, March 1988. Also IBM Research Report RJ 5578, April 1987.

[5] CCITT. Recommendation X.500: The Directory, December 1988.

[6] George A. Champine, Daniel E. Geer Jr., and William N. Ruh. Project Athena as a distributed computer system. IEEE Computer, 23(9):40–51, September 1990.

[7] Douglas Comer, Ralph E. Droms, and Thomas P. Murtagh. An experimental implementation of the Tilde naming system. Computing Systems, 4(3):487–515, Fall 1990.

[8] Robert J. Fowler. Decentralized Object Finding Using Forwarding Addresses. PhD thesis, University of Washington, December 1985. Department of Computer Science technical report 85-12-1.

[9] David K. Gifford. Weighted voting for replicated data. In Proceedings of the 7th ACM Symposium on Operating System Principles, pages 150–159, December 1979. Pacific Grove, California.

[10] Cary G. Gray and David R. Cheriton. Leases: An efficient fault-tolerant mechanism for distributed file cache consistency. In Proceedings of the 12th ACM Symposium on Operating Systems Principles, pages 202–210, December 1989.

[11] Andy Hisgen, Andrew Birrell, Timothy Mann, Michael Schroeder, and Garret Swart. Availability and consistency tradeoffs in the Echo distributed file system. In Proceedings of the 2nd IEEE Workshop on Workstation Operating Systems, pages 49–54, September 1989.

[12] John H. Howard, Michael L. Kazar, Sherri G. Menees, David A. Nichols, M. Satyanarayanan, Robert N. Sidebotham, and Michael J. West. Scale and performance in a distributed file system. ACM Transactions on Computer Systems, 6(1):51–81, February 1988.

[13] Butler W. Lampson. Hints for computer system design. In Proceedings of the 9th ACM Symposium on Operating System Principles, pages 33–48, 1983.

[14] Butler W. Lampson. Designing a global name service. In Proceedings of the 4th ACM Symposium on Principles of Distributed Computing, August 1985.

[15] Paul Mockapetris. Domain names - concepts and facilities. DARPA Internet RFC 1034, November 1987.
[16] S. J. Mullender and A. S. Tanenbaum. The design of a capability-based distributed operating system. The Computer Journal, 29(4):289–299, 1986.

[17] Roger M. Needham and Michael D. Schroeder. Using encryption for authentication in large networks of computers. Communications of the ACM, 21(12):993–999, December 1978.

[18] B. Clifford Neuman. Issues of scale in large distributed operating systems. Generals Report, Department of Computer Science, University of Washington, May 1988.

[19] B. Clifford Neuman. Proxy-based authorization and accounting for distributed systems. Technical Report 91-02-01, Department of Computer Science and Engineering, University of Washington, March 1991.

[20] B. Clifford Neuman. The Prospero File System: A global file system based on the Virtual System Model. In Proceedings of the Workshop on File Systems, May 1992.

[21] David Notkin, Andrew P. Black, Edward D. Lazowska, Henry M. Levy, Jan Sanislo, and John Zahorjan. Interconnecting heterogeneous computer systems. Communications of the ACM, 31(3):258–273, March 1988.

[22] John K. Ousterhout, Andrew R. Cherenson, Frederick Douglis, Michael N. Nelson, and Brent B. Welch. The Sprite network operating system. IEEE Computer, 21(2):23–35, February 1988.

[23] Larry L. Peterson. The Profile naming service. ACM Transactions on Computer Systems, 6(4):341–364, November 1988.

[24] D. Presotto, R. Pike, K. Thompson, and H. Trickey. Plan 9: A distributed system. In Proceedings of Spring 1991 EurOpen, May 1991.

[25] R. Sandberg, D. Goldberg, S. Kleiman, D. Walsh, and B. Lyon. Design and implementation of the Sun Network File System. In Proceedings of the Summer 1985 Usenix Conference, pages 119–130, June 1985.

[26] Mahadev Satyanarayanan. Scalable, secure, and highly available distributed file access. IEEE Computer, 23(5):9–21, May 1990.

[27] Michael D. Schroeder, Andrew D. Birrell, and Roger M. Needham. Experience with Grapevine: The growth of a distributed system. ACM Transactions on Computer Systems, 2(1):3–23, February 1984.

[28] M. F. Schwartz. The networked resource discovery project. In Proceedings of the IFIP XI World Congress, pages 827–832, August 1989. San Francisco.

[29] J. G. Steiner, B. C. Neuman, and J. I. Schiller. Kerberos: An authentication service for open network systems. In Proceedings of the Winter 1988 Usenix Conference, pages 191–201, February 1988. Dallas, Texas.

[30] Andrew S. Tanenbaum, Robbert van Renesse, Hans van Staveren, Gregory J. Sharp, Sape J. Mullender, Jack Jansen, and Guido van Rossum. Experience with the Amoeba distributed operating system. Communications of the ACM, 33(12):47–63, December 1990.

[31] Douglas B. Terry, Mark Painter, David W. Riggle, and Songnian Zhou. The Berkeley Internet domain server. In Proceedings of the 1984 Usenix Summer Conference, pages 23–31, June 1984.

[32] B. Walker, G. Popek, R. English, C. Kline, and G. Thiel. The Locus distributed operating system. In Proceedings of the 9th ACM Symposium on Operating Systems Principles, pages 49–70, October 1983.