Cooperative Cache Management in Mobile Ad Hoc Networks
A_i = α/(t_c − t_l) + (1 − α)·A_i,
where t_c is the current time, t_l is the last access time, and α is a constant factor that weighs the importance of the most recent access. Then, t_l is set to the current time.
Distance. Distance (δ) is measured as the number of hops between the requesting client and the responding client (data source or cache). This policy incorporates distance as an important parameter in selecting a victim for replacement: the greater the distance, the greater the value of the data item, because caching data items that are farther away saves bandwidth and reduces latency for subsequent requests.
Coherency. A data item d_i is valid for a limited lifetime, which is known from its TTL_i field. An item that is valid for a shorter period should be preferred for replacement.
Size (s). A data item with a larger size should be chosen for replacement, because the cache can then accommodate more data items and satisfy more access requests.
Based on the above factors, the value_i function for a data item d_i is computed using the following expression:
value_i = w_1·A_i + w_2·δ_i + w_3·TTL_i + w_4/s_i,
where w_1, w_2, w_3, w_4 are weight factors such that w_1 + w_2 + w_3 + w_4 = 1 and 0 ≤ w_j ≤ 1. The data item with the lowest value of the value function is chosen as the victim for replacement.
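The victim selection just described can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the record fields, example numbers, and default weights are ours, and in practice the four factors would typically be normalized to comparable scales before being weighted.

```python
# Sketch of the value-based replacement policy.
def item_value(access_rate, distance_hops, ttl_remaining, size_kb,
               w=(0.25, 0.25, 0.25, 0.25)):
    """value_i = w1*A_i + w2*delta_i + w3*TTL_i + w4/s_i, with sum(w) = 1."""
    w1, w2, w3, w4 = w
    return w1 * access_rate + w2 * distance_hops + w3 * ttl_remaining + w4 / size_kb

def choose_victim(cache):
    """The item with the lowest value is the replacement victim."""
    return min(cache, key=lambda it: item_value(it["A"], it["delta"], it["ttl"], it["s"]))

cache = [
    {"id": 1, "A": 0.9, "delta": 1, "ttl": 400, "s": 2.0},  # popular, long-lived
    {"id": 2, "A": 0.1, "delta": 1, "ttl": 50,  "s": 8.0},  # rarely used, near, expiring
    {"id": 3, "A": 0.5, "delta": 4, "ttl": 900, "s": 4.0},  # far away, long-lived
]
victim = choose_victim(cache)   # item 2 scores lowest on every factor
```

Note that large items and nearby items score lower, so they are evicted first, which matches the intent of the size and distance factors above.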
4.2 Cache Admission Control
When a client receives the requested data, a cache
admission control is triggered to decide whether it should
be brought into the cache. Inserting a data item into the
cache might not always be favorable because it can lower
the probability of cache hits [7]. In this paper, the cache admission control allows a host to cache a data item based on its distance from the data source or another host that holds the requested data. If that host or data source is one hop away from the requesting host, the data is not cached; otherwise the data item is cached. For example, if the origin of the data resides in the same zone as the requesting client, the item is not cached, because it is unnecessary to replicate a data item within a zone: the cached data can already be used by closely located hosts. In general, copies of the same data item are cached at least one hop apart.
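This admission rule reduces to a one-line predicate. The sketch below is ours (the paper specifies only the one-hop threshold; how hop counts are obtained, e.g. from the routing layer, is an assumption):

```python
def should_cache(supplier_hops: int) -> bool:
    """Admission control: cache a received item only when it came from more
    than one hop away, so copies of the same item stay at least one hop apart."""
    return supplier_hops > 1

# Supplier in the same zone (one hop): do not replicate.
print(should_cache(1))   # False
# Distant supplier: caching the item saves bandwidth on future requests.
print(should_cache(3))   # True
```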
4.3 Cache Consistency
Cache consistency issue must be addressed to ensure that
clients only access valid states of the data. Two widely
used cache consistency models are the weak consistency
and the strong consistency model. In the weak consistency
model, stale data might be returned to the client. In the strong consistency model, after an update completes, no stale copy of the modified data will be returned to the client.
Recently, we have done some work [15, 16] on
maintaining strong cache consistency in the one-hop
based mobile environment. However, due to bandwidth
and power constraints in ad hoc networks, it is too
expensive to maintain strong consistency, and the weak
consistency model is more attractive [6, 7, 9]. ZC caching uses a simple weak consistency model based on time-to-live (TTL), in which a client considers a cached copy up to date if its TTL has not expired. The client removes the cached data when the TTL expires, and refreshes a cached data item and its TTL if a fresh copy of the same data passes by.
5. PERFORMANCE ANALYSIS
In this section, we evaluate the performance of ZC
caching through simulation experiments.
5.1 Simulation Model
The client query model is similar to the one used in our previous studies [15, 16]. The time interval between two consecutive queries generated by each client follows an exponential distribution with mean T_q. After a query is sent out, the client does not generate a new query until the pending query is served. Each client generates accesses to the data items following a Zipf distribution [23] with skewness parameter θ. Similar to other studies [6, 8], we chose θ to be 0.8.
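The query workload just described can be generated as follows (a sketch; the function and constant names are ours, and the Zipf distribution is approximated by weights proportional to 1/i^θ over item ranks):

```python
import random

N = 1000        # database size
THETA = 0.8     # Zipf skewness parameter
MEAN_TQ = 5.0   # mean query generate time T_q, in seconds

# Zipf-like access probabilities: p(rank i) proportional to 1 / i**THETA.
weights = [1.0 / (i ** THETA) for i in range(1, N + 1)]

def next_query(rng=random):
    """Return (inter-arrival delay, requested item id) for one client."""
    delay = rng.expovariate(1.0 / MEAN_TQ)                # exponential, mean T_q
    item = rng.choices(range(1, N + 1), weights=weights, k=1)[0]
    return delay, item
```

Because the weights decay with rank, low-ranked (popular) items are requested far more often, which is what makes caching effective in this workload.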
The simulation area is of size 1500 m x 1500 m. The clients move according to the random waypoint model. Initially, the clients are randomly distributed in the area. Each client selects a random destination and moves towards it at a speed chosen randomly from [v_min, v_max]. After the client reaches its destination, it pauses for a period and then repeats this movement pattern.
There are N data items at the server. Data item sizes vary from s_min to s_max, such that the size s_i of item d_i is
s_i = s_min + random()·(s_max − s_min + 1),  i = 1, 2, ..., N,
where random() is a random function uniformly distributed between 0 and 1. The data are updated only by the server, which serves requests on a FCFS (first-come-first-served) basis. When the server sends a data item to a client, it sends the TTL value along with the data. The TTL value is drawn from an exponential distribution with a given mean. After the TTL expires, the client has to get the new version of the item, either from the server or from another client, before serving the query. Table 1 shows the system parameters.
Table 1 - Simulation parameters.
Parameter                        | Default Value | Range
Database size (N)                | 1000 items    |
s_min                            | 1 KB          |
s_max                            | 10 KB         |
Number of clients (M)            | 70            | 50~100
Client cache size (C)            | 800 KB        | 200~1400 KB
Client speed (v_min~v_max)       | 2 m/s         | 2~20 m/s
Bandwidth (b)                    | 2 Mbps        |
TTL                              | 5000 sec      | 200~10000 sec
Pause time                       | 300 sec       |
Mean query generate time (T_q)   | 5 sec         | 2~100 sec
Transmission range (r)           | 250 m         | 25~250 m
α                                | 0.25          |
w_i (1 ≤ i ≤ 4)                  | 0.25          |
Skewness parameter (θ)           | 0.8           |
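Under these parameters, the per-item sizes and TTLs can be generated as follows (a sketch reading random() as uniform on [0, 1) per the text above; names and structure are ours):

```python
import random

S_MIN, S_MAX = 1, 10   # item size bounds in KB (Table 1)
N = 1000               # database size
MEAN_TTL = 5000.0      # mean TTL in seconds (Table 1)

def item_size(rng=random):
    """s_i = s_min + random() * (s_max - s_min + 1): uniform over [s_min, s_max + 1)."""
    return S_MIN + rng.random() * (S_MAX - S_MIN + 1)

def item_ttl(rng=random):
    """Per-item TTL drawn from an exponential distribution with the given mean."""
    return rng.expovariate(1.0 / MEAN_TTL)

sizes = [item_size() for _ in range(N)]
```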
5.2 Simulation Results
Here we examine the impact of cache size, query generate time and node density on the ZC caching strategy. For performance comparison with ZC, two other schemes, non-cooperative (NC) caching and CacheData [6], are also implemented. In NC, locally missed data items are fetched from the origin server; this strategy is taken as the baseline case against which the cooperative caching schemes are compared. In our experiments, the same data access pattern and mobility model are applied to all three schemes. NC and CacheData use the LRU cache replacement algorithm, whereas ZC employs the value-based replacement strategy proposed in this paper. Two performance metrics are evaluated: cache hit ratio and average query latency.
Effects of cache size. Figure 3 shows the effect of cache size on the hit ratio and query latency as the cache size varies from 200 KB to 1400 KB. The cache hit ratio comprises local hits, zone hits and remote hits. For the CacheData scheme, the zone hit is always zero, whereas both zone hit and remote hit are zero for the NC scheme.
Figure 3a shows that the local cache hit ratio increases with increasing cache size, because with a larger cache more data can be shared locally. The local hit ratio of NC is always the lowest. When the cache size is small, CacheData performs similarly to NC because both use LRU replacement. Due to its value-based replacement, ZC has the highest local hit ratio at all cache sizes.
Figure 3: Effect of cache size on (a) cache hit ratio, and
(b) average query latency.
Due to cooperation within a zone, the remote hit ratio of ZC is always higher than that of CacheData. When the cache size is small, the contribution of zone hits and remote hits is more significant.
From Figure 3b, we can see that the proposed scheme
performs much better than NC. Because of cooperation
within a zone, the ZC scheme behaves much better than
CacheData. When the cache size is small, more of the required data items can be found in the local+zone cache for ZC, as compared to CacheData, which utilizes only the local cache. Thus, the need for accessing the remote and global cache in ZC is alleviated. Because a zone data hit costs one hop, less than the average hop count of a remote data hit, ZC achieves lower query latency. When the cache size is large enough, the MHs can access most of the required data items from the local, zone and remote caches, which reduces query latency.
Comparing the three schemes, we can see that ZC performs much better than NC or CacheData: because of its high overall hit ratio, ZC achieves the best performance.
Effects of query generate time. Figure 4 shows the effect of the mean query generate time T_q. At small T_q, more queries are generated and hence more cache replacements take place, which results in a low cache hit ratio. Due to value-based replacement, ZC behaves better than NC and CacheData. The cache hit ratio improves as T_q increases. At very large values of T_q, the hit ratio is low because the query generate rate is so low that the number of cached data items is small, and many cached items are unusable because their TTLs have already expired before queries are generated for them. Figure 4a verifies this trend.
Figure 4b shows the average query latency as a function of the mean generate time T_q. At small values of T_q, the query generate rate is high and the system workload is heavy, which results in high average query latency. When T_q increases, fewer queries are generated and the average query latency drops. If T_q keeps increasing, the average query latency drops slowly or even increases slightly due to the decrease in cache hit ratio. At extremely high T_q, most queries are served by the remote data server and all schemes perform similarly.
Effects of node density. We vary the number of mobile nodes from 50 to 100 in the network area to study the performance under different node densities. As shown in Figure 5a, each scheme shows almost the same local hit ratio at all node densities. Because of value-based replacement, the local hit ratio of ZC is better than that of the other schemes at all node densities. When the node density is high, the number of MHs in a cooperation zone increases, which improves the zone hit ratio and remote hit ratio. Due to zone cooperation, ZC performs better than CacheData under different node densities. The NC cache hit ratio is independent of node density due to its non-cooperative nature.
Figure 5b shows the average query latency as a function of the node density. The query latency of the NC and CacheData schemes increases much faster than that of the ZC scheme. This can be explained by the fact that with increasing node density more nodes are available in a zone, thus increasing the size of the zone and remote caches, so ZC suffers only a marginal increase in query latency.
Figure 4: Effect of mean query generate time on (a) cache
hit ratio, and (b) average query latency.
6. CONCLUSIONS
This paper presents the ZC caching scheme for efficient
data retrieval in ad hoc networks. The caching scheme is
scalable and incurs low overhead with increasing number
of nodes. The scheme enables clients in a zone to share
their data which helps alleviate the longer average query
latency and limited data accessibility problems in ad hoc
networks. The caching scheme includes a discovery
process and a cache management technique. The proposed
cache discovery algorithm ensures that a requested data item is returned from the nearest cache or the server. As part of cache management, a cache admission control, a value-based replacement policy and a cache consistency technique are developed. The admission control prevents excessive data replication by enforcing a minimum distance between copies of the same data item, while the replacement policy helps improve the cache hit ratio and data accessibility. Cache consistency ensures that clients only access valid states of the data.
A simulation-based performance study was conducted to evaluate the proposed scheme. The results show that the ZC caching scheme performs better than the other caching strategies in terms of cache hit ratio and average query latency.
Our future work includes a more extensive performance evaluation. We also intend to extend our scheme so that each client maintains cache state information for its zone, allowing cache management to be performed on a unified zone cache rather than on a single local cache.
Figure 5: Effect of node density on (a) cache hit ratio, and
(b) average query latency.
REFERENCES
1. Frodigh, M., Johansson, P., and Larsson, L., 2000, Wireless Ad Hoc Networking - The Art of Networking Without a Network, Ericsson Review, No. 4.
2. Das, S., Perkins, C., and Royer, E., 2000,
Performance Comparison of Two On-Demand
Routing Protocols for Ad Hoc Networks, IEEE
INFOCOM, 3-12.
3. Johnson, D., and Maltz, D., 1996, Dynamic Source
Routing in Ad Hoc Wireless Networks, Mobile
Computing, 158-181.
4. Perkins, C., and Bhagwat, P., 1994, Highly Dynamic
Destination-Sequenced Distance-Vector Routing
(DSDV) for Mobile Computers, ACM SIGCOMM,
234-244.
5. Perkins, C., and Royer, E.M., 1999, Ad Hoc On-
Demand Distance Vector Routing, IEEE Workshop
on Mobile Computing Systems and Applications, 90-
100.
6. Yin, L., and Cao, G., 2004, Supporting Cooperative
Caching in Ad Hoc Networks, IEEE INFOCOM,
2537-2547.
7. Cao, G., Yin, L., and Das, C., 2004, Cooperative
Cache Based Data Access Framework for Ad Hoc
Networks, IEEE Computer, 32-39.
8. Shen, H., Das, S.K., Kumar, M., and Wang, Z., 2004,
Cooperative Caching with Optimal Radius in Hybrid
Wireless Networks, NETWORKING, 41-853.
9. Wang, Z., Yin, L., and Cao, G., 2004, Secure
Cooperative Cache Based Data Access in Ad Hoc
Networks, NSF International Workshop on
Theoretical and Algorithmic Aspects of Wireless Ad
Hoc, Sensor, and Peer-to-Peer Networks.
10. Takaaki, M., and Aida, H., 2003, Cache Data Access
System in Ad Hoc Networks, Vehicular Technology
Conference (VTC), 1228-1232.
11. Nuggehalli, P., Srinivasan, V., and Chiasserini, C.-F.,
2003, Energy-Efficient Caching Strategies in Ad
Hoc Wireless Networks, MobiHoc, 25-34.
12. Papadopouli, M., and Schulzrinne, H., 2001, Effects
of Power Conservation, Wireless Coverage and
Cooperation on Data Dissemination Among Mobile
Devices, ACM SIGMOBILE Symposium on Mobile
Ad Hoc Networking and Computing (MobiHoc).
13. Lau, W.H.O., Kumar, M. and Venkatesh, S., 2002,
Cooperative Cache Architecture in Support of
Caching Multimedia Objects in MANETs, 5th ACM International Workshop on Wireless Mobile Multimedia, 56-63.
14. Hara, T., 2003, Replica Allocation Methods in Ad
Hoc Networks with Data Update, Kluwer Journal of
Mobile Networks and Applications, 8(4), 343-354.
15. Chand, N., Joshi, R.C., and Misra, M., 2004,
Broadcast Based Cache Invalidation and Prefetching
in Mobile Environment, International Conference on
High Performance Computing (HiPC), Springer-
Verlag LNCS 3296, 410-419.
16. Chand, N., Joshi, R.C., and Misra, M., 2005, Energy
Efficient Cache Invalidation in a Disconnected
Wireless Mobile Environment, International Journal
of Ad Hoc and Ubiquitous Computing (IJAHUC), to
appear.
17. Cao, G., 2002, On Improving the Performance of
Cache Invalidation in Mobile Environments,
ACM/Kluwer Mobile Networks and Applications,
7(4), 291-303.
18. Cao, G., 2003, A Scalable Low-Latency Cache
Invalidation Strategy for Mobile Environments,
IEEE Transactions on Knowledge and Data
Engineering, Vol. 15, No. 5, 1251-1265.
19. Friedman, R., Gradinariu, M., and Simon, G., 2004,
Locating Cache Proxies in MANETs, 5th ACM International Symposium on Mobile Ad Hoc Networking and Computing, 175-186.
20. Sailhan, F., and Issarny, V., 2003, Cooperative
Caching in Ad Hoc Networks, International
Conference on Mobile Data Management (MDM),
13-28.
21. Lim, S., Lee, W.-C., Cao, G., and Das, C.R., 2005,
A Novel Caching Scheme for Improving Internet-
Based Mobile Ad Hoc Networks Performance, Ad
Hoc Networks Journal, Elsevier Science, to appear.
22. Lim, S., Lee, W.-C., Cao, G., and Das, C.R., 2004,
Performance Comparison of Cache Invalidation
Strategies for Internet-Based Mobile Ad Hoc
Networks, IEEE International Conference on Mobile
Ad Hoc and Sensor Systems (MASS), 104-113.
23. Breslau, L., Cao, P., Fan, L., Phillips, G., and Shenker, S., 1999, Web Caching and Zipf-Like Distributions: Evidence and Implications, IEEE INFOCOM, 126-134.