Path Discovery Mechanism/Sanity

This page is dedicated to the following investigation: does the Path Discovery Mechanism (PDM) behave as expected in terms of path length?

There have been observations that the implemented PDM may result in unnecessarily long paths. The canonical example is the ultra-dense mesh cloud of a classroom, where all the nodes are within transmission range of each other. In such a scenario, does the PDM form multihop paths too often? How often?

And if so, should we expect all the paths in this scenario to be single hop? Should we try to fix or reduce this, and how could we do it?

In trying to answer the questions above, others will come. For instance: how do data traffic and PDM traffic influence each other? Will the burstiness of PDM traffic significantly degrade data throughput? And will data traffic prevent path formation or induce unnecessarily long paths?

This page is dedicated to the formulation of these questions and, hopefully, to some of the answers.


Question #1: Is it true that multihop paths form even if all the nodes are close to each other?

For this initial step, we'll use a Ten Kids Under the Tree testbed in which ten XOs are placed in a very dense configuration (less than 2 square metres) and every node is within transmission range of every other.

In such a situation one would expect the paths to be mostly, if not all, single hop, because a successful transmission at 54Mbps between any two nodes is theoretically always possible. A path with two or more hops will have to be explained and, the longer the path, the more suboptimal and suspicious the result.

The test

The tests were performed in a quiet environment (noise floor < -96dBm), and the build used was 703 (Update.1-703) with Libertas firmware 22.p10. In order to use this firmware release, a special test kernel was used. Also, the following configurations were applied:

  • mesh ttl was set to 5
  • multicast pings were enabled

The workload used for this particular test was multicast pings (addressed to 224.0.0.1, default size, 1Hz). The test was performed three times (1 minute per run).

What are we measuring?

One possibility would be to simply log the forwarding table of each node during the test and then check the path lengths. That's a good test, but this method was designed to be non-intrusive and, because of that, applicable to any traffic captures available (for instance, from the Collaboration testbed).

So, to get a taste (or a measure, if you will) of how (in)sane the PDM is in terms of path length, we'll analyse traffic captures and calculate an index for every path: the Data Path Length Index (DPLI).

The DPLI is calculated from the average hop count of the data frames exchanged between two nodes (see the example in Table 1). Of course this index is not strictly necessary (the table itself brings interesting information) and of course it has no meaning by itself. It is not an actual path length, but an indicator of the actual path lengths that formed during the test. Apart from confirming that multihop paths are being formed, it gives us a baseline for future comparisons and makes it easier to analyse overall results.
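
To make the bookkeeping concrete, here is a minimal Python sketch of how such an index can be computed, assuming we already know, for each data frame exchanged between a pair of nodes, how many hops its path had. It is only an illustration of the idea, not the actual script used to produce Table 1 (the page does not spell out, for instance, whether the percentages count unique frames or captured transmissions).

  from statistics import mean

  def dpli(hop_counts):
      """Average hop count over the observed data frames (one entry per frame)."""
      return mean(hop_counts)

  def hop_distribution(hop_counts, max_hops=5):
      """Percentage of frames that travelled over paths of each length (cf. Table 1)."""
      total = len(hop_counts)
      return {h: 100.0 * hop_counts.count(h) / total for h in range(1, max_hops + 1)}

  # Hypothetical sample: 85 frames over 1-hop paths, 12 over 2 hops, 3 over 3 hops.
  sample = [1] * 85 + [2] * 12 + [3] * 3
  print(hop_distribution(sample))  # {1: 85.0, 2: 12.0, 3: 3.0, 4: 0.0, 5: 0.0}
  print(round(dpli(sample), 2))    # 1.18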

Table 1 shows the DPLIs for a node (B) in a real test. We can see that the paths to C, J and H are pure single-hop paths, but for all other neighbours a longer path did exist at some point. One should keep in mind that paths are constantly rediscovered because of (1) path expiration (which defaults to 10 seconds) or (2) failure to forward traffic. We can also clearly see that the path from B to D was much longer than any other path in this test, and this may be worth investigating.


Table 1 Node B's DPLIs for each neighbour
neigh 1hop 2hops 3hops 4hops 5hops DPLI
G 90.0% 10.0% 0.0% 0.0% 0.0% 1.05
A 82.1% 17.9% 0.0% 0.0% 0.0% 1.09
J 100.0% 0.0% 0.0% 0.0% 0.0% 1.00
D 59.0% 24.0% 4.0% 6.0% 7.0% 1.33
F 87.1% 8.6% 4.3% 0.0% 0.0% 1.08
H 100.0% 0.0% 0.0% 0.0% 0.0% 1.00
E 89.6% 10.4% 0.0% 0.0% 0.0% 1.05
I 86.5% 13.5% 0.0% 0.0% 0.0% 1.07
C 100.0% 0.0% 0.0% 0.0% 0.0% 1.00


Test results

Table 2 consolidates the results for the 3 runs of the multicast ping test described above. It clearly indicates that multihop paths do form in the ten kids under the tree scenario.

Table 2 Consolidated result for the ten kids under the tree scenario
node DPLI stdev
A 1.07 0.08
C 1.05 0.07
B 1.07 0.08
E 1.04 0.06
D 1.06 0.07
G 1.05 0.07
F 1.05 0.07
I 1.04 0.05
H 1.06 0.07
J 1.06 0.08

Global result: 1.05 (stdev = 0.01)

The answer

Yes, it is true that multihop paths form even in a Kids Under the Tree scenario. Exactly 9.99% of the data frames captured in this test were transmitted over a multihop path. Is this excessive? What are the root causes? Keep reading ;-)


Question #2 - Does PDM traffic self-interfere?

We have already confirmed that multihop paths do form in the ten kids under the tree scenario; now it is time to investigate deeper.

In the "ten nodes under the tree scenario" the measured was DPLI 1.05, maybe not too high, but can it become intolerable in case more nodes are added or more traffic are generated? Another suspicion worth investigating is that path discovery traffic can self-interfere. There is a lot of sense to this, since PDM is bursty traffic.

The test

To investigate this, we varied the route expiration time in the ten kids under the tree scenario and repeated the same test as above (*** link). We calculated the DPLI and also the percentage of data frames that were transmitted over multihop paths.
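
The multihop-frame percentage reported below can be derived from the same per-frame hop counts as the DPLI sketch above; again, this is only an assumed accounting, shown to make the metric concrete.

  def multihop_percentage(hop_counts):
      """Percentage of data frames whose path had more than one hop (cf. Table 3)."""
      return 100.0 * sum(1 for h in hop_counts if h > 1) / len(hop_counts)

  # With the hypothetical sample from the DPLI sketch: multihop_percentage(sample) == 15.0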

Test results

Table 3 - The self-interference of PDM traffic
Route exp time (s) Mean DPLI Stdev DPLI Multihop frames %
5 1.09 0.02 16.58
10 1.05 0.01 9.99
15 1.05 0.01 8.72
20 1.05 0.01 9.65
25 1.04 0.01 7.44
30 1.03 0.01 6.24


The answer

Yes. The more path discovery traffic you have, the longer your paths will get. But why? That's question #3.

Question #3 - Why does more traffic cause longer paths?

Since PDM traffic self-interferes, it seems safe to assume that any traffic will interfere with the path discovery traffic and, as a result, increase the average path length. But why?

There may be more than one answer here.

Answer #3.1: Not all nodes decode the same frames (even if close to each other)

A common misunderstanding about dense wireless environments is that every node will capture the same traffic simply because they are very close and share a common transmission area. If this were the case, either a frame would be received by all the nodes or a collision would render it useless to all of them. But this is not true at all.

  • Obstructions and reflections (the unpredictable multipath effect) change the probability that a frame is successfully decoded by a node. Even nodes some centimetres apart will experience different signal levels.
  • The internal condition of the node (from the signal levels to the protocol stack) may cause a frame that was successfully decoded by a nearby neighbour to be lost.
  • Slight differences in the antenna gain and in the radio sensitivity between two neighbours (to name only two items) are normal.
  • And, more interestingly, even if none of the above held true, and we had this perfect scenario (no obstructions or reflections, no differences in the devices or in their internal states), two nodes would still not always successfully decode the same subset of the transmitted frames, because of the capture effect (not to be confused with the channel capture effect).

The above can easily be demonstrated: it is enough to compare traffic captures and check for differences, and that is what we did in the following tests.
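
As a sketch of what "comparing traffic captures" means here: given one set of frame identifiers per sniffer (for instance the pair of transmitter address and 802.11 sequence number, extracted from each pcap by whatever tool is at hand), the overlap can be checked with plain set operations. The identifiers below are hypothetical, chosen only to mirror the kind of situation reported in the tests that follow (e.g. 97 frames captured by each of two sniffers, but only 96 in common).

  def compare_captures(frames_a, frames_b):
      """Given one set of frame identifiers per sniffer, report the overlap."""
      return {
          "common": frames_a & frames_b,
          "only_in_a": frames_a - frames_b,
          "only_in_b": frames_b - frames_a,
      }

  # Hypothetical identifiers: both sniffers hold 97 frames, but not the same 97.
  capture_1 = {("J", seq) for seq in range(0, 97)}
  capture_2 = {("J", seq) for seq in range(1, 98)}
  result = compare_captures(capture_1, capture_2)
  print(len(result["common"]),
        len(result["only_in_a"]),
        len(result["only_in_b"]))  # 96 1 1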

Test #1

In this experiment, 4 XOs (B, D, G, I) captured traffic while 2 other XOs (A, J) exchanged UDP datagrams. Each of them transmitted 100 datagrams to a multicast address. So, ideally, each of the four sniffers would capture 400 UDP frames (100 from A, 100 from J, the 100 from J retransmitted by A, and the 100 from A retransmitted by J). The results are shown in Table 5.

Table 5 Number of captured frames
node from A from J retransmitted by J retransmitted by A
A - 97 - -
B 98 100 98 97
D 98 100 98 97
G 98 98 97 97
I 98 100 97 97
J 98 - - -


Note that we are not interested in the fact that not all sniffers captured all the transmitted frames, but in the fact that nodes G and I captured fewer frames than nodes B and D, though they were sitting on the same table. Actually, fewer is not the point. The point is that they captured a different subset of the transmitted datagrams: both G and I captured 97 frames retransmitted by J, but only 96 of those frames in common. A slight difference that nevertheless proves our point.

But is this phenomenon specific to a class of traffic? To show that it can affect any traffic, we performed the following test.

Test #2

This time nodes A and J exchanged traffic while E and F captured the frames. Table 6 shows the differences in the traffic captured by E and F, paying special attention to PDM traffic.

Table 6 Comparison of the number of captured frames
Node | Total captured frames | From A | To A | PREQs from A | PREQs from A (54Mbps) | From J | To J | PREQs from J | PREQs from J (54Mbps)
E | 13,618 | 1678 | 469 | 145 | 29 | 1790 | 430 | 145 | 36
F | 13,496 | 1687 | 466 | 150 | 32 | 1681 | 428 | 140 | 36

Table 6 shows that it is possible for a PREQ to be successfully decoded by a group of nodes, but not by its intended destination (we'll see why this is relevant shortly). But is this related to the modulation technique or data rate used? Table 7 shows that this can be observed irrespective of the data rate:

Table 7 - Captured PDM
Node PREQs PREPs 54Mbps 36Mbps 11Mbps 1Mbps
E 7334 326 4525 1741 1947 2992
F 7298 328 4484 1735 1923 2978


Note that the fact that F decoded 0.9% fewer frames than E does not mean that it will perform worse with respect to every class of frame: node F actually captured slightly more of the PREPs transmitted at 54Mbps.

Also interesting is the fact that frames transmitted at the more robust data rates of 1Mbps (like the PREPs in Table 7) or 2Mbps (like the UDP datagrams of test #1) are not free of this effect either. We have no reason to believe that the differences in the set of decoded frames depend on the data rate, but neither do we claim that they do not. This is simply irrelevant for now.


So it is true that not all nodes will decode the same frames. But why does this cause multihop paths?

It does so in combination with the way the PDM metric calculations work.

The following scenario is possible precisely because not all nodes decode the same frames (even if close to each other):

Node "A" broadcasts a path request at 54Mbps to find node "J". Node "J" does not decode this frame, but some other nodes around, say node "C", do. Some milliseconds before that, node "C" will rebroadcast the PREQ from "A", and this time, "J" will listen and respond with a "PREP" and you have a two hops path, where a single hop would be preferred (in principle).

Note that it doesn't make any difference whether "J" receives any of the other PREQs sent by "A" (at the data rates of 36, 11 or 1Mbps), because two 54Mbps hops (second line of Table 8) will beat one 36Mbps hop (third line of Table 8).

Table 8 - Possible combinations of link costs for mesh ttl 5 - the 12 lowest costs
total cost data rates for each link
13 54
26 54,54
28 36
39 54,54,54
41 54,36
46 11
52 54,54,54,54
54 54,54,36
56 36,36
59 54,11
64 1
65 54,54,54,54,54
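
For reference, Table 8 can be reproduced with a small enumeration. The Python sketch below assumes that the per-link costs are exactly those implied by the single-hop rows of the table (13, 28, 46 and 64 for 54, 36, 11 and 1Mbps respectively) and that a path's cost is the sum of its link costs; it is an illustration, not the firmware's actual code.

  from itertools import combinations_with_replacement

  # Per-link costs implied by the single-hop rows of Table 8 (assumed, not taken
  # from the firmware source).
  LINK_COST = {54: 13, 36: 28, 11: 46, 1: 64}

  def lowest_path_costs(max_hops=5, how_many=12):
      """Enumerate rate combinations for paths of up to max_hops links, cheapest first."""
      paths = []
      for hops in range(1, max_hops + 1):
          for rates in combinations_with_replacement(sorted(LINK_COST, reverse=True), hops):
              paths.append((sum(LINK_COST[r] for r in rates), rates))
      return sorted(paths)[:how_many]

  for cost, rates in lowest_path_costs():
      print(cost, ",".join(str(r) for r in rates))
  # First rows: 13 (54), 26 (54,54), 28 (36), ... -- two 54Mbps links (cost 26)
  # beat a single 36Mbps link (cost 28), which is why the two-hop path in the
  # scenario above wins.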


In short

Our answer #3.1 rests on two demonstrated facts:

  • The fact that not all nodes decode the same frames
  • The fact that a two-hop path may be caused by the above

Is answer #3.1 the only possible one? Does it account for all the multihop paths registered in a dense mesh cloud?


Question #4 - Can we fix or reduce these (supposedly unnecessary) multihop paths by adjusting metric costs?

In answer #3.1 we pointed out two causes for the creation of multihop paths in the very dense scenario of a classroom. Could we tune our metrics differently so as to remove multihop paths?

And the answer is clearly no. Changing the costs associated with data rates in PREQ frames is not enough to fix this, because the fact that not all nodes decode the same frames is, by itself, a sufficient condition for the formation of a multihop path. This is the very reason a node relays another node's traffic, and there is nothing you can do in the metrics to avoid that. It would happen even if we used hop count as the metric. If the intended target of a PREQ frame does not receive it first hand (and another node does), a multihop path may form. This is a feature, not a bug!

If, on the other hand, we argue that in an ultra-dense mesh cloud multihop paths must be avoided altogether, that is a completely different issue. That is the equivalent of proposing that ad-hoc links be used instead of a true MANET.

But we can argue that in a classroom multihop links are unnecessary and that it would be good to avoid them (or reduce their occurrence) without killing the mesh. How can we do that? So, we actually have two questions now:

  • Can we reduce these (supposedly unnecessary) multihop paths by adjusting metric costs? (fixing is impossible)
  • If adjusting the metrics is not a fix, what else could we do? That's question #5.

Question #5 - If adjusting the metrics is not a fix, what else could we do?

We said that if the intended target of a PREQ frame does not receive it first hand (and another node does), a multihop path may form. Can we improve the probability that a node gets the PREQ targeted to it?

Yes, we certainly can. We have already identified that PDM traffic self-interferes. So, if we reduce PDM traffic, we will have a positive impact on the path length (already proved ***). So the next question is: can we afford to do that without sacrificing the reliability of the PDM?

Question #6 - How can we reduce PDM traffic without hurting path acquisition time or reliability?

First, let's examine the available ways to reduce PDM traffic:

  • Increase path expiration time
  • Reduce the number of PREQs in a cluster
  • Increase PREQ delay time
  • Reduce ttl

But reducing PDM traffic is really a means of reducing the airtime it occupies, and that airtime can also be reduced in another way:

  • Increase the multicast rate