Sensor Network Protocol Design and Implementation: Philip Levis UC Berkeley
Constraints

Communication is expensive.
Idle listening is the principal energy cost.
Radio hardware transition times can be important.
Low transmission rates can lower cost of idle listening.
Constraints, continued
Uncontrolled environments that drive execution
Variation over time and space
The uncommon is common
Unforeseen corner cases and aberrations
Design Considerations
Uncontrolled environment: simplicity is critical.
The world will find your edge conditions for you.
Simplicity and fault tolerance can be more important than raw performance.
Dissemination
Fundamental networking protocol
Reconfiguration
Reprogramming
Management
Rapid propagation
When new data appears, it should propagate quickly
Scalability
Protocol must operate in a wide range of densities
Cannot require a priori density information
Probabilistic broadcasts
Discrete effort (terminate): does not handle disconnection
Solution: Trickle

Every once in a while, broadcast what data you have, unless you've heard some other nodes broadcast the same thing recently.

Behavior (simulation and deployment):
Maintenance: a few sends per hour
Propagation: less than a minute
Scalability: thousand-fold density changes
Trickle Assumptions
Broadcast medium
Concise, comparable metadata
Given A and B, know if one needs an update
Trickle Algorithm

Time interval of length τ
Redundancy constant k (e.g., 1, 2)
Maintain a counter c
Pick a time t from [0, τ]
At time t, transmit metadata if c < k
Increment c when you hear metadata identical to your own
Transmit updates when you hear older metadata
At end of τ, pick a new t
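The per-interval logic above can be sketched in Python. This is a toy model for illustration; the class and method names (`Trickle`, `hear_identical`, `should_transmit`) are my own, not from the TinyOS implementation:

```python
import random

class Trickle:
    """Minimal sketch of Trickle's maintenance logic for one node."""

    def __init__(self, tau, k=1):
        self.tau = tau        # interval length (seconds)
        self.k = k            # redundancy constant
        self.new_interval()

    def new_interval(self):
        self.c = 0                            # identical broadcasts heard this interval
        self.t = random.uniform(0, self.tau)  # transmission point picked from [0, tau]

    def hear_identical(self):
        # Hearing metadata identical to our own increments c,
        # which can suppress our own transmission.
        self.c += 1

    def should_transmit(self, now):
        # At time t within the interval, transmit metadata only if c < k.
        return now >= self.t and self.c < self.k
```

With k = 1, a single identical broadcast from a neighbor is enough to suppress a node's own transmission, which is what keeps the per-cell send count roughly constant as density grows.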
[Figure: Trickle example in a three-node cell. Nodes A, B, and C each pick a random transmission time (tA1, tB1, tC1, …) in successive intervals and maintain counter c; a node transmits at its chosen time unless it has already heard k identical broadcasts in the interval, in which case its transmission is suppressed. Legend: transmission, suppressed transmission, reception.]
CS 268, Spring 2005
Experimental Methodology
High-level, algorithmic simulator
Single-hop network with a uniform loss rate
Maintenance Evaluation
Start with idealized assumptions, relax each
Lossless cell
Perfect interval synchronization
Single hop network
Loss (algorithmic simulator)

[Figure: Transmissions/Interval (0-12) vs. number of motes (1-256), for uniform packet loss rates of 0%, 20%, 40%, and 60%.]
Synchronization (algorithmic simulator)

[Figure: Transmissions/Interval (0-14) vs. number of motes (1-256), with and without interval synchronization.]
[Figure: Nodes A-D with unsynchronized intervals over time: a node that transmits early in its interval may not have heard its neighbors first (the short-listen problem).]

Listen-only period
Listen for the first half of each interval; pick t from [τ/2, τ].
[Figure: Transmissions/Interval (0-14) vs. number of motes (1-256) for three cases: not synchronized, synchronized, and not synchronized with a listen-only period.]
Multihop Network (TOSSIM)

Redundancy: (transmissions + receptions) / intervals − k

[Figure: Redundancy (0-3.5) vs. number of motes (16-1024), with and without collisions.]
Empirical Validation
(TOSSIM and deployment)
1-64 motes on a table, low transmit power
Maintenance Overview

Trickle maintains a per-node communication rate
Scales logarithmically with density, to meet the per-node rate for the worst-case node
The communication rate is really a number of transmissions over space
Small interval:
Higher transmission rate (higher maintenance cost)
Lower latency to discovery (faster propagation)

Examples (k=1):
At τ = 10 seconds: 6 transmits/min, discovery of 5 sec/hop
At τ = 1 hour: 1 transmit/hour, discovery of 30 min/hop
Speeding Propagation

Adjust τ between bounds τl and τh:
When τ expires, double τ up to τh
When you hear newer metadata, set τ to τl
When you hear newer data, set τ to τl
When you hear older metadata, send data
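The interval-adjustment rules can be written out directly. A minimal sketch, with `tau_l`/`tau_h` standing in for the slide's τl/τh bounds (the class and method names are illustrative, not from the TinyOS code):

```python
class AdaptiveInterval:
    """Sketch of Trickle's interval adjustment between tau_l and tau_h."""

    def __init__(self, tau_l=1.0, tau_h=60.0):
        self.tau_l = tau_l    # lower bound: fast propagation
        self.tau_h = tau_h    # upper bound: cheap maintenance
        self.tau = tau_l

    def interval_expired(self):
        # Consistent network: back off, doubling tau up to tau_h.
        self.tau = min(2 * self.tau, self.tau_h)

    def heard_newer(self):
        # Newer data or metadata nearby: shrink tau to tau_l
        # so the update propagates quickly.
        self.tau = self.tau_l
```

The doubling gives low steady-state cost; the reset to τl is what buys sub-minute propagation when new data appears.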
Simulated Propagation

New data (20 bytes) at lower left corner
16-hop network
Set τl = 1 sec, τh = 1 min

[Figure: Time to reception in seconds across the network, shaded in 2-second bands from 0-2 up to 18-20.]
Empirical Propagation
Deployed 19 nodes in office setting
Instrumented nodes for accurate installation times
40 test runs
Network Layout
(about 4 hops)
Empirical Results

k=1, τl = 1 second, τh = 20 minutes

[Figure: Mote distribution of time to reception, τh = 20 m, k = 1; x-axis time in seconds (0 to 45+), y-axis fraction of motes (0-30%).]
Dissemination
Trickle scales logarithmically with density
Can obtain rapid propagation with low maintenance
In example deployment, maintenance of a few sends/hour, propagation of 30 seconds
Aggregation Routing
Collect data aggregates from a network
How many nodes are there?
What is the mean temperature?
What is the median temperature?
Tree-based Routing

Used in:
Query delivery
Data collection

Continuous process
Mitigates failures

[Figure: A routing tree. A query (Q: SELECT …) floods down from the root toward nodes D, E, and F; results (R: {…}) flow back up the tree.]
Basic Aggregation

In each epoch, each node combines:
its local readings
readings from its children

Extras:
Illustration: Aggregation

SELECT COUNT(*)
FROM sensors

[Figure: animation over communication intervals 4, 3, 2, 1. In each interval, nodes one level deeper in the tree transmit their partial counts; each sensor adds 1 for itself to the counts heard from its children, and by the final interval the root's count reaches 5 for the five-node example network.]
SELECT COUNT(*), pipelined

Interval # = Level

[Figure: nodes listen (L) and transmit (T) in the communication interval matching their level in the tree, so partial counts climb one level per interval.]

Pipelining: increase throughput by delaying result arrival until a later epoch
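The COUNT(*) walkthrough above can be condensed into a few lines. This is a toy in-memory tree, not TinyDB; the topology below is invented for illustration:

```python
def count_aggregate(tree, node):
    """Each node reports 1 for itself plus the partial counts
    received from its children (tree maps node -> list of children)."""
    return 1 + sum(count_aggregate(tree, child) for child in tree.get(node, []))

# Hypothetical five-node tree rooted at node 1:
# 1 has children 2 and 3; 3 has children 4 and 5.
topology = {1: [2, 3], 3: [4, 5]}
print(count_aggregate(topology, 1))  # 5
```

In the real protocol the recursion is inverted: leaves transmit first, and each parent sums whatever partial counts it heard in its child-level interval, which is why a single lost message drops an entire subtree from the result.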
Aggregation Framework

Support any aggregation function conforming to:
Aggn = {finit, fmerge, fevaluate}
finit(a0) → <a0> (the initial aggregate value)

Example: Average
AVGinit(v) → <v, 1>
AVGmerge(<S1, C1>, <S2, C2>) → <S1 + S2, C1 + C2>
AVGevaluate(<S, C>) → S/C
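The AVERAGE decomposition above, written out as a straightforward Python rendering of finit / fmerge / fevaluate:

```python
def avg_init(v):
    # finit: a partial state record (PSR) for AVERAGE is <sum, count>.
    return (v, 1)

def avg_merge(a, b):
    # fmerge: <S1 + S2, C1 + C2>.
    return (a[0] + b[0], a[1] + b[1])

def avg_evaluate(psr):
    # fevaluate: S / C.
    s, c = psr
    return s / c
```

Because fmerge only ever sees PSRs, intermediate nodes can combine children's partial averages without knowing how many readings each one summarizes.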
Types of Aggregates

SQL supports MIN, MAX, SUM, COUNT, AVERAGE
Any function over a set can be computed via TAG
In-network benefit for many operations
E.g., standard deviation, top/bottom N, spatial union/intersection, histograms, etc.
Benefit depends on the compactness of the PSR
TAG/TinyDB Observations

Complex: requires a collection tree as well as pretty good time synchronization
Fragile: a single lost result can greatly perturb the result
In practice, really hard to get working:
Sonoma data yield < 50%
Intel TASK project (based on TinyDB) has had many deployment troubles/setbacks (GDI 2004)
Basic Observation

Fragility comes from duplicate and order sensitivity
A PSR included twice will perturb the result
The computational model is bound to the communication model
Synopsis Diffusion
Order and duplicate insensitive (ODI) aggregates
Every node generates a sketch of its value
Aggregation combines sketches in an ODI way
E.g., take a boolean OR
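Why OR works as an ODI merge: bitwise OR is commutative, associative, and idempotent, so neither arrival order nor duplicate delivery can change the merged sketch. A quick check:

```python
def merge(a, b):
    # ODI merge for bit-field sketches: bitwise OR.
    return a | b

s1, s2, s3 = 0b00001, 0b00010, 0b00100

# Order-insensitive: any merge order yields the same sketch.
assert merge(merge(s1, s2), s3) == merge(s3, merge(s2, s1)) == 0b00111

# Duplicate-insensitive: merging the same sketch twice is a no-op.
assert merge(merge(s1, s2), s2) == merge(s1, s2)
```

This is exactly the property that ordinary PSRs (like AVERAGE's <sum, count>) lack: adding the same <S, C> twice changes the answer, while OR-ing the same sketch twice does not.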
More Specifically

Three functions:
Generate initial sketch (produce a bit field)
Merge sketches (ODI)
Compute aggregate from complete sketch

Example

[Figure: animation over a five-node multi-path topology. Each node's sketch absorbs those of the nodes below it ({4,5}, then {3,4,5}, and so on) until the root holds {1,2,3,4,5}; because merging is ODI, sketches that arrive along multiple paths do not perturb the result.]
Count Example

Three functions:
Generate initial sketch: each node sets a single bit in a bit field (e.g., 00001, 00010, 00100)
Merge sketches (ODI): bitwise OR (00001 | 00010 = 00011; 00011 | 00100 = 00111)
Compute aggregate from complete sketch: the lowest unset bit of the merged sketch (bit 3, marked 01000 in the example) gives the estimate 8/1.556 = 5.14
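The bit fields above behave like a Flajolet-Martin counting sketch: each node sets bit i with probability 2^-(i+1), sketches merge by OR, and the position of the lowest unset bit in the merged sketch yields the estimate. A sketch of the idea; the 1.556 correction constant is taken from the slide's 8/1.556 = 5.14 example (other presentations of FM counting use different constants):

```python
import random

def initial_sketch(rng, bits=16):
    # Set bit i with probability 2^-(i+1): flip a fair coin until tails.
    i = 0
    while i < bits - 1 and rng.random() < 0.5:
        i += 1
    return 1 << i

def lowest_zero_bit(sketch):
    # Position of the lowest unset bit in the merged sketch.
    i = 0
    while sketch & (1 << i):
        i += 1
    return i

def estimate_count(sketch):
    # Estimate = 2^j / 1.556, matching the slide's 8/1.556 = 5.14.
    return 2 ** lowest_zero_bit(sketch) / 1.556

merged = 0b00111  # OR of the five nodes' sketches in the example
print(round(estimate_count(merged), 2))  # 5.14
```

The estimate is coarse for a single sketch, which is exactly the accuracy concern raised on the next slide.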
ODI Issues

Sketches are robust, but they are inaccurate estimates.
Standard deviation of the error is 1.13 bits
A Stream of Sketches

ODI rings
Only merge in lower rings
Example: hop count
Implementation Experience

TAG: implemented in the TinyDB system
Two months of work to get TinyDB to work in deployment
Very low data yield; no one has been able to get it to work again (TASK project)
Design Considerations
Uncontrolled environment: simplicity is critical.
The world will find your edge conditions for you.
Simplicity and fault tolerance can be more important than raw performance.
Questions