
COOPERATIVE CACHE MANAGEMENT IN MOBILE AD HOC NETWORKS

Narottam Chand, R. C. Joshi and Manoj Misra


Indian Institute of Technology Roorkee, India
Email: {narotdec, joshifcc, manojfec}@iitr.ernet.in


ABSTRACT

Caching of frequently accessed data in ad hoc networks is
a promising technique that can improve data access
performance and availability. Cooperative caching, which
allows the sharing and coordination of cached data among
clients, can further exploit this potential. In this paper, we
propose a novel scheme, called zone cooperative (ZC)
caching, for mobile ad hoc networks. In the ZC scheme,
the one-hop neighbors of a mobile client form a
cooperative cache zone, since the cost of communicating
with them is low both in terms of energy consumption and
message exchange. On a miss in its local cache, a client
first searches for the data in its zone before forwarding the
request to the next client on the routing path towards the
server. As part of cache management, a cache admission
control and a replacement policy are developed to improve
data accessibility and reduce the local cache miss ratio.
Simulation experiments show that the ZC caching
mechanism achieves significant improvements in cache hit
ratio and average query latency in comparison with other
caching strategies.

1. INTRODUCTION

Recent explosive growth in computing and wireless
communication technologies has led to increasing interest
in mobile ad hoc networks (MANETs). Most previous
research [1, 2, 3, 4, 5] in ad hoc networks focuses on the
development of dynamic routing protocols that can
improve one-hop/multi-hop connectivity among mobile
hosts (MHs). Although routing is an important issue in ad
hoc networks, other issues such as data access are also
very important, since the ultimate goal of using such
networks is to provide data access to mobile hosts [6].
One of the most attractive techniques for improving data
availability is caching. In general, caching results in
(i) enhanced QoS at the clients, i.e., lower jitter, latency
and packet loss, (ii) reduced network bandwidth
consumption, and (iii) reduced data server/source
workload. In addition, the reduction in bandwidth
consumption implies that a properly implemented caching
architecture for a MANET environment can potentially
improve battery life in mobile clients.
Caching has proved to be an important technique for
improving data retrieval performance in mobile
environments [15, 16, 17, 18]. However, caching
techniques used in one-hop mobile environments (i.e.,
cellular networks) may not be applicable to multi-hop
mobile environments, since the data or the request may
need to travel over multiple hops. As mobile clients in ad
hoc networks may have similar tasks and share common
interests, cooperative caching, which allows the sharing
and coordination of cached data among multiple clients,
can be used to reduce bandwidth and power consumption.
To date there has been some work in the literature on
cooperative caching in ad hoc networks, addressing
consistency [6, 7], placement [9, 11, 12], discovery [10]
and proxy caching [13, 19, 20, 21, 22]. However, efficient
cache management, such as admission control and cache
replacement, has not yet been considered.
In this paper, a zone cooperative (ZC) cache is proposed
for mobile ad hoc networks. Mobile clients belonging to
the neighborhood (zone) of a given client form a
cooperative cache system for this client, since the cost of
communicating with them is low both in terms of energy
consumption and message exchanges. In ZC, each client
has a cache to store frequently accessed data items. The
cache at a client resides in nonvolatile memory such as a
hard disk. The data items in the cache satisfy not only the
client's own requests but also the data requests passing
through it from other clients. On a miss in the local cache,
the client first searches for the data in its zone before
forwarding the request to the next client that lies on the
path towards the server. To further improve the efficiency
of ZC caching, a cooperation management scheme
including cache admission control, a replacement policy
and cache consistency has also been developed.
Simulations show that the ZC cache achieves higher
performance than existing caching strategies for ad hoc
networks.
The rest of the paper is organized as follows. The network
and system environment are presented in Section 2.
Section 3 describes the proposed ZC caching scheme for
data retrieval. Section 4 discusses the cache management
strategy employed in ZC. Section 5 is devoted to
performance evaluation and presents detailed simulation
results. Section 6 concludes the paper and discusses future
work.

2. NETWORK AND SYSTEM ENVIRONMENT

2.1 Network Model

The topology of an ad hoc network is represented by an
undirected graph G = (V, E), where V is the set of mobile
clients MH_1, MH_2, ..., and E ⊆ V × V is the set of links
between clients. The existence of a link (u, v) ∈ E also
means (v, u) ∈ E, and that clients u and v are within
transmission range of each other, in which case u and v
are called one-hop neighbors of each other. The set of
one-hop neighbors of a client MH_i is denoted by MH_i^1
and forms a zone. The combination of clients and the
transitive closure of their one-hop neighbors forms a
mobile ad hoc network. Two clients that are not connected
but share at least one common one-hop neighbor are
called two-hop neighbors of each other.
As clients can physically move, there is no guarantee that
a neighbor at time t will remain in the zone at a later time
t + Δ. The devices might be turned off/on at any time, so
the set of alive clients varies with time and has no fixed
size.
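
As an illustration of these definitions, the following Python sketch (the helper names are ours, not part of the paper) derives the zone of each host and its two-hop neighbors from an undirected set of links:

from collections import defaultdict

def build_zones(links):
    # links: iterable of (u, v) pairs; returns {host: set of one-hop neighbors}
    zone = defaultdict(set)
    for u, v in links:
        zone[u].add(v)      # (u, v) in E implies (v, u) in E
        zone[v].add(u)
    return zone

def two_hop_neighbors(zone, u):
    # Hosts not linked to u but sharing at least one one-hop neighbor with u.
    reachable = set()
    for n in zone[u]:
        reachable |= zone[n]
    return reachable - zone[u] - {u}

# Example: MH1 - MH2 - MH3 chain; MH1 and MH3 are two-hop neighbors.
zone = build_zones([("MH1", "MH2"), ("MH2", "MH3")])
assert two_hop_neighbors(zone, "MH1") == {"MH3"}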

2.2 System Environment

The system environment is assumed to be an ad hoc
network where mobile hosts access data items held as
originals by other mobile hosts. A mobile host that holds
the original value of a data item is called data
source/server. A data request initiated by a host is
forwarded hop-by-hop along the routing path until it
reaches the data source and then the data source sends
back the requested data. Each mobile host maintains a
local cache on its hard disk. To reduce the bandwidth
consumption and query latency, the number of hops
between the data source/cache and the requester should be
as small as possible [14]. In this system environment, we
also make the following assumptions:
A unique host identifier is assigned to each mobile host in
the system. The system has a total of M hosts, and MH_i
(1 ≤ i ≤ M) is a host identifier. Each host moves freely.
A unique data identifier is assigned to each data item
located in the system. The set of all data items is denoted
by D = {d_1, d_2, ..., d_N}, where N is the total number of
data items and d_j (1 ≤ j ≤ N) is a data identifier. D_i
denotes the actual data of the item with id d_i. The size of
data item d_i is s_i (in bytes).
Each mobile host has a cache space of C bytes.
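
For concreteness, this system model can be written down directly; the sketch below is illustrative only, and its class and field names are ours, chosen to mirror the identifiers M, N, d_j, s_j and C defined above.

from dataclasses import dataclass, field

@dataclass
class DataItem:
    data_id: int       # d_j, 1 <= j <= N
    size: int          # s_j in bytes
    payload: bytes     # D_j, the actual data

@dataclass
class MobileHost:
    host_id: int                                 # MH_i, 1 <= i <= M
    cache_capacity: int                          # C, cache space in bytes
    cache: dict = field(default_factory=dict)    # data_id -> cached DataItem

    def cache_free(self) -> int:
        # Remaining cache space in bytes.
        return self.cache_capacity - sum(it.size for it in self.cache.values())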

3. ZONE COOPERATIVE CACHING SCHEME

This section describes the zone cooperative (ZC) caching
scheme for data retrieval in mobile ad hoc networks. The
design rationale of ZC caching is that it is advantageous
for a client to share its cache with the neighbors lying in
its zone (i.e., the mobile hosts that are accessible in one
hop). The mobile hosts belonging to the zone of a given
host then form a cooperative cache system for this host,
since the cost of communicating with them is low both in
terms of energy consumption and message exchange.
Figure 1 shows the behavior of the ZC caching strategy for
a client request. For each request, one of the following
four cases holds:
Case 1: Local hit. A copy of the requested data item is
stored in the local cache (hard disk) of the requester. The
data item is retrieved to serve the query and no
cooperation is necessary.
Case 2: Zone hit. The requested data item is stored in the
cache of one or more one-hop neighbors of the requester.
Message exchange within the home zone of the requester
is required during cache discovery.
Case 3: Remote hit. The data is found at a client belonging
to a zone (other than the home zone of the requester)
along the routing path to the data source.
Case 4: Global hit. The data item is retrieved from the
server.



Figure 1: Service of a client request by ZC caching
strategy.

3.1 Cache Discovery Process

Since the location of any requested data item is not known
in advance, a discovery algorithm is needed. When a data
request is initiated at an MH, it first looks for the data item
in its own cache. If there is a local cache miss, the MH
checks whether the data item is cached at other MHs
within its home zone. When an MH receives the request
and has the data item in its local cache, it sends a reply to
the requester to acknowledge that it has the data item. In
case of a zone cache miss, the request is forwarded to the
neighbor along the routing path. Before forwarding a
request, each MH along the path searches for the item in
its local cache or zone as described above. If the data item
is not found in the zones along the routing path (i.e., a
remote cache miss), the request finally reaches the data
source and the data source sends back the requested data.
Based on the above idea, we propose a cache discovery
algorithm to determine the data access path to the MH
holding the requested cached data or to the data source. In
Figure 2, let us assume MH_i sends a request for a data
item d_x and MH_k is located along the path over which
the request travels to the data source MH_s, where
k ∈ {a, c, d}. The discovery algorithm is described as
follows:
1. When MH_i needs d_x, it first checks its own cache. If
the data item is not available in its local cache (i.e., a
local cache miss), it broadcasts a request packet to the
mobile hosts in its zone (i.e., to MH_j and MH_a). After
MH_i broadcasts the request, it waits for an
acknowledgement. If it does not get any
acknowledgement within a specified timeout period, it
has failed to get d_x within its home zone (i.e., a zone
cache miss). In case a host in MH_i^1 (MH_j or MH_a)
has the data item d_x, it sends an ack packet to MH_i.
2. When MH_k receives a request packet, it broadcasts
the packet to MH_k^1 (i.e., the mobile hosts in the zone
of MH_k) if it does not have d_x in its local cache. When
MH_k receives an ack packet, it sends a confirm packet
to the ack packet sender. Any additional ack packets
received by MH_k from other hosts in its zone are
discarded, as it has already received an ack packet from
a host closer to it.
3. When a host in MH_i^1 / MH_k^1 / MH_s receives a
confirm packet, it sends the reply packet to the requester.
The reply packet, containing the item id d_x, the actual
data D_x and TTL_x, is forwarded hop-by-hop along the
routing path until it reaches the original requester. Once an
MH receives the requested data, it triggers the cache
admission control procedure to determine whether it
should cache the data item. The ZC cache management
scheme is described in the next section.
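
Before turning to cache management, the control flow of the discovery algorithm can be summarised in a simplified, centralised form. The sketch below is our own abstraction: it walks the routing path and checks each hop's cache and zone, while the request/ack/confirm packets and timeouts of the real protocol are abstracted away; all function and variable names are hypothetical.

def zc_lookup(item_id, routing_path, caches, zones, server_id):
    # routing_path: host ids from the requester towards the data source
    # caches: {host_id: set of cached item ids}
    # zones:  {host_id: set of one-hop neighbor ids}
    requester = routing_path[0]
    for host in routing_path:
        at_home = (host == requester)
        if item_id in caches.get(host, set()):
            # found in this hop's own cache
            return ("local hit" if at_home else "remote hit"), host
        for neighbor in zones.get(host, set()):
            if item_id in caches.get(neighbor, set()):
                # found in this hop's zone: a zone hit only at the home zone
                return ("zone hit" if at_home else "remote hit"), neighbor
        # zone cache miss at this hop: forward along the routing path
    return "global hit", server_id   # request reaches the data source

# Example: path MH_i -> MH_a -> MH_c -> MH_d towards MH_s; d_x is cached at
# MH_b, a one-hop neighbor of MH_c, so the request is served by a remote hit.
hit, serving_host = zc_lookup(
    "d_x", ["MH_i", "MH_a", "MH_c", "MH_d"],
    caches={"MH_b": {"d_x"}}, zones={"MH_c": {"MH_b"}}, server_id="MH_s")
assert (hit, serving_host) == ("remote hit", "MH_b")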




Figure 2: A request packet from client MH_i is forwarded
to the data source MH_s.

4. ZC CACHE MANAGEMENT

Cache management is more complex in cooperative
caching because deciding what to cache can also depend
on the client's neighbors. In this section, we present cache
management, including the cache replacement policy,
cache admission control and cache consistency.

4.1 Cache Replacement Policy

A cache replacement policy is required when an MH
wants to cache a data item but the cache is full, so it needs
to select a suitable subset of data items to evict from the
cache. Cache replacement policies have been extensively
studied in operating systems, virtual memory management
and database buffer management. However, these
algorithms might be unsuitable for ad hoc networks [7].

We have developed a value-based cache replacement
policy, in which the data items with the lowest value are
removed from the cache. Four factors are considered while
computing the value of a data item at a client:
Popularity. The access probability reflects the popularity
of a data item at a host. An item with lower access
probability should be chosen for replacement. A host
records the access probability A_i of each data item d_i.
For item d_i, A_i is initially set to zero and t_l is set to the
current time; A_i is then updated whenever the item is
requested by the host, using the following formula:

A_i^new = α / (t_c − t_l) + (1 − α) · A_i^old

where t_c is the current time, t_l is the last access time and
α is a constant factor that weighs the importance of the
most recent access. After the update, t_l is set to the
current time.
Distance. Distance (δ) is measured as the number of hops
between the requesting client and the responding client
(data source or cache). The policy incorporates the
distance as an important parameter in selecting a victim
for replacement: the greater the distance, the greater the
value of the data item. This is because caching data items
that are further away saves bandwidth and reduces latency
for subsequent requests.
Coherency. A data item d_i is valid only for a limited
lifetime, which is known from its TTL_i field. An item
that is valid for a shorter period should be preferred for
replacement.
Size (s). A data item with a larger size should be chosen
for replacement, because the cache can then accommodate
more data items and satisfy more access requests.
Based on the above factors, the value function value_i for
a data item d_i is computed using the following
expression:

value_i = w_1 · A_i + w_2 · δ_i + w_3 · TTL_i + w_4 / s_i

where w_1, w_2, w_3, w_4 are weight factors such that
w_1 + w_2 + w_3 + w_4 = 1 and 0 ≤ w_j ≤ 1. The data
item with the lowest value of the value function is used as
the victim for replacement.
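
Read operationally, the policy keeps A_i, δ_i, the remaining lifetime TTL_i and s_i per cached item and evicts the items with the smallest weighted value. The following Python sketch assumes equal weights w_j = 0.25 and α = 0.25, as in Table 1, and treats the TTL term as the remaining lifetime in seconds; the class and function names are ours, not the paper's.

import time

ALPHA = 0.25                              # weight of the most recent access (Table 1)
W1, W2, W3, W4 = 0.25, 0.25, 0.25, 0.25   # w_1..w_4, summing to 1

class CachedItem:
    def __init__(self, item_id, size, distance, ttl):
        self.item_id = item_id
        self.size = size                  # s_i (bytes)
        self.distance = distance          # delta_i: hops to the responding host
        self.expiry = time.time() + ttl   # absolute expiry derived from TTL_i
        self.access_prob = 0.0            # A_i, initially zero
        self.last_access = time.time()    # t_l

    def record_access(self):
        # EWMA update of A_i (reconstructed formula of Section 4.1).
        now = time.time()
        gap = max(now - self.last_access, 1e-6)   # guard against a zero gap
        self.access_prob = ALPHA / gap + (1 - ALPHA) * self.access_prob
        self.last_access = now

    def value(self):
        remaining_ttl = max(self.expiry - time.time(), 0.0)
        return (W1 * self.access_prob + W2 * self.distance
                + W3 * remaining_ttl + W4 / self.size)

def choose_victims(cache, bytes_needed):
    # Evict lowest-value items until the requested space is freed.
    victims, freed = [], 0
    for item in sorted(cache, key=lambda it: it.value()):
        if freed >= bytes_needed:
            break
        victims.append(item)
        freed += item.size
    return victims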

4.2 Cache Admission Control

When a client receives the requested data, cache admission
control is triggered to decide whether the data should be
brought into the cache. Inserting a data item into the cache
is not always favorable, because it can lower the
probability of cache hits [7]. In this paper, the cache
admission control allows a host to cache a data item based
on its distance from the data source or from another host
that has the requested data. If that host or data source is
only one hop away from the requesting host, the
requesting host does not cache the data; otherwise it
caches the data item. For example, if the origin of the data
resides in the same zone as the requesting client, the item
is not cached, because it is unnecessary to replicate a data
item within the same zone: the cached data can already be
used by the closely located hosts. In general, copies of the
same data item are therefore not cached within one hop of
each other.
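
The admission rule itself reduces to a single distance test. The sketch below assumes the reply carries the hop count from the responding host (the same quantity used by the distance factor δ in Section 4.1); the function name is hypothetical.

def should_cache(hops_to_responder):
    # ZC admission control (Section 4.2): admit the item only if the reply
    # came from outside the requester's zone, i.e. more than one hop away.
    return hops_to_responder > 1

# A reply served by a one-hop neighbor (zone hit) is not cached,
# while one fetched from three hops away is admitted.
assert should_cache(1) is False
assert should_cache(3) is True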

4.3 Cache Consistency

The cache consistency issue must be addressed to ensure
that clients only access valid states of the data. Two
widely used cache consistency models are the weak
consistency model and the strong consistency model. In
the weak consistency model, stale data might be returned
to the client. In the strong consistency model, after an
update completes, no stale copy of the modified data will
be returned to the client.
Recently, we have done some work [15, 16] on
maintaining strong cache consistency in the one-hop based
mobile environment. However, due to bandwidth and
power constraints in ad hoc networks, it is too expensive
to maintain strong consistency, and the weak consistency
model is more attractive [6, 7, 9]. ZC caching uses a
simple weak consistency model based on the time-to-live
(TTL), in which a client considers a cached copy
up-to-date if its TTL has not expired. The client removes
the cached data when the TTL expires. A client refreshes a
cached data item and its TTL if a fresh copy of the same
data passes by.
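
Under this weak consistency model, a cached entry needs only an absolute expiry time. A minimal sketch of the TTL check and refresh follows; the class and method names are ours and are not part of the paper.

import time

class TTLEntry:
    def __init__(self, data, ttl_seconds):
        self.data = data
        self.expiry = time.time() + ttl_seconds   # absolute expiry time

class WeakConsistencyCache:
    def __init__(self):
        self.entries = {}                  # item_id -> TTLEntry

    def get(self, item_id):
        entry = self.entries.get(item_id)
        if entry is None:
            return None
        if time.time() >= entry.expiry:
            del self.entries[item_id]      # TTL expired: remove the stale copy
            return None
        return entry.data                  # unexpired copy is treated as up to date

    def refresh(self, item_id, data, ttl_seconds):
        # Called when a fresh copy of the same item passes by.
        self.entries[item_id] = TTLEntry(data, ttl_seconds)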


5. PERFORMANCE ANALYSIS

In this section, we evaluate the performance of ZC
caching through simulation experiments.

5.1 Simulation Model

The client query model is similar to the one used in our
previous studies [15, 16]. The time interval between two
consecutive queries generated by each client follows an
exponential distribution with mean T_q. After a query is
sent out, the client does not generate a new query until the
pending query is served. Each client generates accesses to
the data items following a Zipf distribution [23] with
skewness parameter θ. Similar to other studies [6, 8], we
chose θ to be 0.8.
The simulation area is assumed to be 1500 m x 1500 m.
The clients move according to the random waypoint
model. Initially, the clients are randomly distributed in the
area. Each client selects a random destination and moves
towards the destination with a speed selected randomly
from [v_min, v_max]. After the client reaches its
destination, it pauses for a period and then repeats this
movement pattern.
There are N data items at the server. Data item sizes vary
from s_min to s_max such that the size s_i of item d_i is

s_i = s_min + random() · (s_max − s_min + 1), i = 1, 2, ..., N,

where random() is a random function uniformly
distributed between 0 and 1. The data are updated only by
the server. The server serves the requests on an FCFS
(first-come-first-serve) basis. When the server sends a data
item to a client, it sends the TTL value along with the
data. The TTL value is set exponentially with a mean
value. After the TTL expires, the client has to get the new
version of the item, either from the server or from another
client, before serving the query. Table 1 shows the system
parameters.

Table 1 - Simulation parameters.

Parameter                          Default Value   Range
Database size (N)                  1000 items
s_min                              1 KB
s_max                              10 KB
Number of clients (M)              70              50~100
Client cache size (C)              800 KB          200~1400 KB
Client speed (v_min~v_max)         2 m/s           2~20 m/s
Bandwidth (b)                      2 Mbps
TTL                                5000 sec        200~10000 sec
Pause time                         300 sec
Mean query generate time (T_q)     5 sec           2~100 sec
Transmission range (r)             250 m           25~250 m
α                                  0.25
w_i (1 ≤ i ≤ 4)                    0.25
Skewness parameter (θ)             0.8
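
For reference, the client workload described above can be generated in a few lines. The sketch below uses the default parameter values of Table 1; the helper names are ours, and random.choices is used as a simple stand-in for an exact Zipf sampler.

import random

N_ITEMS = 1000                     # database size N
THETA = 0.8                        # Zipf skewness parameter
MEAN_TQ = 5.0                      # mean query generate time T_q (seconds)
S_MIN, S_MAX = 1024, 10 * 1024     # s_min = 1 KB, s_max = 10 KB (bytes)

# Zipf-like popularity: P(i) proportional to 1 / i^theta, i = 1..N.
weights = [1.0 / (i ** THETA) for i in range(1, N_ITEMS + 1)]

# Item sizes, fixed once per item: s_i = s_min + random() * (s_max - s_min + 1).
sizes = [S_MIN + int(random.random() * (S_MAX - S_MIN + 1)) for _ in range(N_ITEMS)]

def next_query():
    # One client query: exponential think time with mean T_q, Zipf item choice.
    think_time = random.expovariate(1.0 / MEAN_TQ)
    item_index = random.choices(range(N_ITEMS), weights=weights)[0]
    return think_time, item_index, sizes[item_index]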

5.2 Simulation Results

Here we examine the impact of cache size, query generate
time and node density on the ZC caching strategy. For
performance comparison with ZC, two other schemes,
non-cooperative (NC) caching and CacheData [6], are also
implemented. In NC, locally missed data items are fetched
from the origin server. This strategy is taken as the
baseline case against which the cooperative caching
schemes are compared. In our experiments, the same data
access pattern and mobility model are applied to all three
schemes. The NC and CacheData schemes use the LRU
cache replacement algorithm, whereas the ZC scheme
employs the value-based replacement strategy proposed in
this paper. Two performance metrics are evaluated: cache
hit ratio and average query latency.

Effects of cache size. Figure 3 shows the effect of cache
size on the hit ratio and query latency as the cache size is
varied from 200 KB to 1400 KB. The cache hit ratio
comprises local hits, zone hits and remote hits. For the
CacheData scheme, the zone hit is always zero, whereas
both the zone hit and the remote hit are zero for the NC
scheme.
Figure 3a shows that the local cache hit ratio increases
with increasing cache size, because with a larger cache
more data can be shared locally. The local hit ratio for NC
is always the lowest. When the cache size is small,
CacheData performs similarly to NC because both use
LRU replacement. Due to the use of value-based
replacement, ZC has the highest local hit ratio at all cache
sizes.
Figure 3: Effect of cache size on (a) cache hit ratio, and
(b) average query latency.

Due to cooperation within a zone, the remote hit ratio of
ZC is always higher than that of CacheData. When the
cache size is small, the contribution of zone hits and
remote hits is more significant.
From Figure 3b, we can see that the proposed scheme
performs much better than NC. Because of cooperation
within a zone, the ZC scheme also behaves much better
than CacheData. When the cache size is small, more of the
required data items can be found in the local+zone cache
under ZC, compared to CacheData, which utilizes only the
local cache. Thus, the need for accessing the remote and
global cache in ZC is alleviated. Because the hop count of
a zone hit is one, which is less than the average hop count
of a remote hit, ZC achieves lower query latency. When
the cache size is large enough, the MHs can access most
of the required data items from the local, zone and remote
caches, which reduces query latency.
Comparing the three schemes, we can see that ZC
performs much better than NC or CacheData. Because of
its high overall hit ratio, ZC achieves the best performance
among the schemes compared.

Effects of query generate time. Figure 4 shows the effect
of the mean query generate time T_q. At small T_q, more
queries are generated and hence more cache replacements
take place, which results in a low cache hit ratio. Due to
value-based replacement, ZC behaves better than NC and
CacheData. The cache hit ratio improves with an increase
in T_q. At very large values of T_q, the hit ratio is low
again, because the query generate rate is so low that the
number of cached data items is small and many cached
data items are not usable, since their TTLs have already
expired before queries are generated for them. Figure 4a
verifies this trend.
Figure 4b shows the average query latency as a function
of the mean query generate time T_q. At small values of
T_q, the query generate rate is high and the system
workload is heavier. This results in a high average query
latency. When T_q increases, fewer queries are generated
and the average query latency drops. If T_q keeps
increasing, the average query latency drops slowly or even
increases slightly due to the decrease in cache hit ratio.
Under extremely high T_q, most of the queries are served
by the remote data server and all schemes perform
similarly.

Effects of node density. We vary the number of mobile
nodes from 50 to 100 in the network area to study the
performance under different node densities. As shown in
Figure 5a, each scheme shows almost the same local hit
ratio at all node densities. Because of value-based
replacement, the local hit ratio of ZC is better than that of
the other schemes at all node densities. When the node
density is high, the number of MHs in a cooperation zone
increases, which leads to an improvement in the zone hit
ratio and the remote hit ratio. Due to zone cooperation, ZC
performs better than CacheData under different node
densities. The NC cache hit ratio is independent of the
node density due to its non-cooperative nature.
Figure 5b shows the average query latency as a function
of the node density. The query latency of the NC and
CacheData schemes increases much faster than that of the
ZC scheme. This can be explained by the fact that with
increasing node density more nodes are available in a
zone, thus increasing the size of the zone and remote
caches, which causes only a marginal increase in the query
latency of the ZC scheme.
Figure 4: Effect of mean query generate time on (a) cache
hit ratio, and (b) average query latency.

6. CONCLUSIONS

This paper presents the ZC caching scheme for efficient
data retrieval in ad hoc networks. The caching scheme is
scalable and incurs low overhead as the number of nodes
increases. The scheme enables clients in a zone to share
their data, which helps alleviate the long average query
latency and limited data accessibility problems in ad hoc
networks. The caching scheme includes a discovery
process and a cache management technique. The proposed
cache discovery algorithm ensures that requested data is
returned from the nearest cache or from the server. As part
of cache management, cache admission control, a
value-based replacement policy and a cache consistency
technique are developed. The admission control prevents
excessive data replication by enforcing a minimum
distance between copies of the same data item, while the
replacement policy helps improve the cache hit ratio and
accessibility. Cache consistency ensures that clients only
access valid states of the data.
A simulation-based performance study was conducted to
evaluate the proposed scheme. The results show that the
ZC caching scheme performs better in terms of cache hit
ratio and average query latency in comparison with other
caching strategies.
Our future work includes a more extensive performance
evaluation. We also intend to extend our scheme so that
each client maintains cache state information for its zone,
allowing cache management to be performed on a unified
zone cache rather than a single local cache.
Figure 5: Effect of node density on (a) cache hit ratio, and
(b) average query latency.

REFERENCES
1. Frodigh, M., Johansson, P., and Larsson, L., 2000,
Wireless Ad Hoc Networking: The Art of

Networking Without a Network, Ericsson Review,
No. 4.
2. Das, S., Perkins, C., and Royer, E., 2000,
Performance Comparison of Two On-Demand
Routing Protocols for Ad Hoc Networks, IEEE
INFOCOM, 3-12.
3. Johnson, D., and Maltz, D., 1996, Dynamic Source
Routing in Ad Hoc Wireless Networks, Mobile
Computing, 158-181.
4. Perkins, C., and Bhagwat, P., 1994, Highly Dynamic
Destination-Sequenced Distance-Vector Routing
(DSDV) for Mobile Computers, ACM SIGCOMM,
234-244.
5. Perkins, C., and Royer, E.M., 1999, Ad Hoc On-
Demand Distance Vector Routing, IEEE Workshop
on Mobile Computing Systems and Applications, 90-
100.
6. Yin, L., and Cao, G., 2004, Supporting Cooperative
Caching in Ad Hoc Networks, IEEE INFOCOM,
2537-2547.
7. Cao, G., Yin, L., and Das, C., 2004, Cooperative
Cache Based Data Access Framework for Ad Hoc
Networks, IEEE Computer, 32-39.
8. Shen, H., Das, S.K., Kumar, M., and Wang, Z., 2004,
Cooperative Caching with Optimal Radius in Hybrid
Wireless Networks, NETWORKING, 41-853.
9. Wang, Z, Yin, L., and Cao, G., 2004, Secure
Cooperative Cache Based Data Access in Ad Hoc
Networks, NSF International Workshop on
Theoretical and Algorithmic Aspects of Wireless Ad
Hoc, Sensor, and Peer-to-Peer Networks.
10. Takaaki, M., and Aida, H., 2003, Cache Data Access
System in Ad Hoc Networks, Vehicular Technology
Conference (VTC), 1228-1232.
11. Nuggehalli, P., Srinivasan, V., and Chiasserini, C.-F.,
2003, Energy-Efficient Caching Strategies in Ad
Hoc Wireless Networks, MobiHoc, 25-34.
12. Papadopouli, M., and Schulzrinne, H, 2001, Effects
of Power Conservation, Wireless Coverage and
Cooperation on Data Dissemination Among Mobile
Devices, ACM SIGMOBILE Symposium on Mobile
Ad Hoc Networking and Computing (MobiHoc).
13. Lau, W.H.O., Kumar, M. and Venkatesh, S., 2002,
Cooperative Cache Architecture in Support of
Caching Multimedia Objects in MANETs, 5th ACM
International Workshop on Wireless Mobile
Multimedia, 56-63.
14. Hara, T., 2003, Replica Allocation Methods in Ad
Hoc Networks with Data Update, Kluwer Journal of
Mobile Networks and Applications, 8(4), 343-354.
15. Chand, N., Joshi, R.C., and Misra, M., 2004,
Broadcast Based Cache Invalidation and Prefetching
in Mobile Environment, International Conference on
High Performance Computing (HiPC), Springer-
Verlag LNCS 3296, 410-419.
16. Chand, N., Joshi, R.C., and Misra, M., 2005, Energy
Efficient Cache Invalidation in a Disconnected
Wireless Mobile Environment, International Journal
of Ad Hoc and Ubiquitous Computing (IJAHUC), to
appear.
17. Cao, G., 2002, On Improving the Performance of
Cache Invalidation in Mobile Environments,
ACM/Kluwer Mobile Networks and Applications,
7(4), 291-303.
18. Cao, G., 2003, A Scalable Low-Latency Cache
Invalidation Strategy for Mobile Environments,
IEEE Transactions on Knowledge and Data
Engineering, Vol. 15, No. 5, 1251-1265.
19. Friedman, R., Gradinariu, M., and Simon, G., 2004,
Locating Cache Proxies in MANETs, 5th ACM
International Symposium on Mobile Ad Hoc
Networking and Computing, 175-186.
20. Sailhan, F., and Issarny, V., 2003, Cooperative
Caching in Ad Hoc Networks, International
Conference on Mobile Data Management (MDM),
13-28.
21. Lim, S., Lee, W.-C., Cao, G., and Das, C.R., 2005,
A Novel Caching Scheme for Improving Internet-
Based Mobile Ad Hoc Networks Performance, Ad
Hoc Networks Journal, Elsevier Science, to appear.
22. Lim, S., Lee, W.-C., Cao, G., and Das, C.R., 2004,
Performance Comparison of Cache Invalidation
Strategies for Internet-Based Mobile Ad Hoc
Networks, IEEE International Conference on Mobile
Ad Hoc and Sensor Systems (MASS), 104-113.
23. Breslau, L., Cao, P., Fan, L., Phillips, G., and Shenker,
S., 1999, Web Caching and Zipf-Like Distributions:
Evidence and Implications, IEEE INFOCOM, 126-
134.
