Path Discovery Mechanism/Sanity


This page is dedicated to the following investigation: Does the Path Discovery Mechanism (PDM) behave as expected in terms of path length?

There have been observations that the implemented PDM may wrongly result in unnecessarily long paths.

The canonical example is the ultra-dense mesh cloud of a classroom, where all the nodes are within the transmission range of each other. In such a scenario, does the PDM form multihop paths too often? How often?

And if so, can we really expect all the paths to be single hop in such a scenario? Should we try to fix or at least reduce this, and how could we do it?

In trying to answer the questions above, others will come up.

How do data traffic and PDM traffic influence each other? Will the burstiness of PDM traffic significantly degrade data throughput? And will data traffic prevent path formation, or induce unnecessarily long paths?

Let's try to answer these questions, one by one.


Question #1: Is it true that multihop paths form even if all the nodes are close to each other?

For this initial step, we'll use a Ten Kids Under the Tree testbed in which ten XOs are placed in a very dense configuration (an area of less than 2 square metres) and every node is within transmission range of every other.

In such a situation one would expect the paths to be mostly, if not all, single hop, because a successful transmission at 54 Mbps between any two nodes is theoretically always possible. A path with two or more hops will have to be explained and, the longer the path, the more suboptimal and suspicious the result.

The test

The tests were performed in a quiet environment (noise floor < -96 dBm), and the build used was 703 (Update.1-703) with Libertas firmware 22.p10. In order to use this firmware release, a special test kernel was used. Also, the following configurations were applied:

  • mesh ttl was set to 5
  • multicast pings were enabled

The workload used for this particular test was multicast pings (addressed to 224.0.0.1, default size, 1 Hz). The test was performed three times (1 minute per run).
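
For reference, here is a minimal sketch of a comparable workload generator (Python). It sends UDP datagrams instead of the ICMP multicast pings actually used, and the port number is an arbitrary choice for the sketch, so it is illustrative only:

 import socket
 import time

 # Minimal sketch of a comparable 1 Hz multicast workload. UDP datagrams
 # stand in for the ICMP multicast pings used in the actual test.
 MCAST_GROUP = "224.0.0.1"  # all-hosts group, as in the test
 MCAST_PORT = 5007          # arbitrary port, not part of the test setup

 sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
 # Match the mesh TTL of 5 used in the test configuration.
 sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 5)

 for _ in range(60):  # each run lasts 1 minute, as in the test
     sock.sendto(b"probe", (MCAST_GROUP, MCAST_PORT))
     time.sleep(1.0)  # 1 Hz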

What are we measuring?

One possibility would be to simply log the forwarding table of each node during the test and then check the path lengths. That's a good test, but this method was designed to be non-intrusive and, because of that, applicable to any traffic captures available (for instance, from the Collaboration testbed).
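
As an illustration, here is a rough sketch of such a capture analysis (Python). The frame records, and how the hop count is recovered from the capture, are assumptions of this sketch, not part of the test tools:

 from collections import Counter, defaultdict

 # Rough sketch of the capture analysis. Assumes per-frame records
 # (src, dst, hops) were already extracted from a traffic capture; how
 # the hop count is recovered (e.g. from the mesh TTL field) is an
 # assumption of this sketch.
 def hop_distributions(frames):
     """Map each (src, dst) pair to the fraction of its data frames
     that traversed 1, 2, 3, ... hops (as in Table 1)."""
     counts = defaultdict(Counter)
     for src, dst, hops in frames:
         counts[(src, dst)][hops] += 1
     return {pair: {h: n / sum(c.values()) for h, n in c.items()}
             for pair, c in counts.items()}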

So, to have a taste (or measure, if you will) of how (in)sane the PDM is in terms of path length, we'll analyse traffic captures and calculate an index for every path: the Data Path Length Index (DPLI).

The DPLI is calculated from the average hop count of the data frames exchanged between two nodes (see the example in Table 1). Of course this index is not strictly necessary (the table itself carries the interesting information) and of course it has no meaning by itself. It is not an actual path length, but an indicator of the actual path lengths that formed during the test. Apart from confirming that multihop paths are being formed, it gives us a baseline for future comparisons and makes it easier to analyse overall results.
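
A minimal sketch of the DPLI computation (Python), assuming that the DPLI is the square root of the mean hop count — an assumption, but one consistent with every row of Table 1; it reproduces node D's entry below:

 from math import sqrt

 # Sketch of the DPLI computation, assuming (consistently with the
 # Table 1 values) that the DPLI is the square root of the mean hop count.
 def dpli(hop_distribution):
     """hop_distribution maps hop count -> fraction of data frames."""
     mean_hops = sum(h * frac for h, frac in hop_distribution.items())
     return sqrt(mean_hops)

 # Node B -> D row of Table 1: 59% 1-hop, 24% 2-hop, 4% 3-hop, 6% 4-hop, 7% 5-hop
 print(round(dpli({1: 0.59, 2: 0.24, 3: 0.04, 4: 0.06, 5: 0.07}), 2))  # -> 1.33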

Table 1 shows the DPLIs for one node (B) in a real test. We can see that the paths to C, J and H are pure single-hop paths, but for all other neighbours a longer path did exist at some point. One should keep in mind that paths are constantly rediscovered, because of (1) path expiration (which defaults to 10 seconds) or (2) failure to forward traffic. We can also clearly note that the path from B to D was much longer than any other path in this test, and this may be worth investigating.


Table 1: Node B's DPLI for each neighbour
neighbour 1 hop 2 hops 3 hops 4 hops 5 hops DPLI
G 90.0% 10.0% 0.0% 0.0% 0.0% 1.05
A 82.1% 17.9% 0.0% 0.0% 0.0% 1.09
J 100.0% 0.0% 0.0% 0.0% 0.0% 1.00
D 59.0% 24.0% 4.0% 6.0% 7.0% 1.33
F 87.1% 8.6% 4.3% 0.0% 0.0% 1.08
H 100.0% 0.0% 0.0% 0.0% 0.0% 1.00
E 89.6% 10.4% 0.0% 0.0% 0.0% 1.05
I 86.5% 13.5% 0.0% 0.0% 0.0% 1.07
C 100.0% 0.0% 0.0% 0.0% 0.0% 1.00


Test results

Table 2 consolidates the results of the three runs of the multicast ping test described above. It clearly indicates that multihop paths do form in the ten kids under the tree scenario.

Table 2: Consolidated results for the ten kids under the tree scenario
node DPLI stdev
A 1.07 0.08
C 1.05 0.07
B 1.07 0.08
E 1.04 0.06
D 1.06 0.07
G 1.05 0.07
F 1.05 0.07
I 1.04 0.05
H 1.06 0.07
J 1.06 0.08

Global result: 1.05 (stdev = 0.01)
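
As a sanity check, this global figure can be reproduced from the per-node values in Table 2 (a Python sketch; the tiny discrepancy in the mean comes from the per-node values being displayed with only two decimals):

 from statistics import mean, stdev

 # Reproduce the global result from the per-node DPLIs of Table 2.
 dplis = [1.07, 1.05, 1.07, 1.04, 1.06, 1.05, 1.05, 1.04, 1.06, 1.06]
 print(mean(dplis), stdev(dplis))  # ~1.055 and ~0.011, consistent with 1.05 (stdev = 0.01)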

The answer

Yes. It is true that multihop paths form even in a Kids under the Tree scenario. Exactly 9.99% of the data frames captured in this test were transmitted over a multihop path. Is it excessive? What are the root causes for this? Keep reading ;-)


Question #2 - Does PDM traffic self-interfere?

We have already confirmed that multihop paths do form in the ten kids under the tree scenario; now it is time to investigate this more deeply.

In the "ten nodes under the tree scenario" the measured was DPLI 1.05, maybe not too high, but can it become intolerable in case more nodes are added or more traffic are generated? Another suspicion worth investigating is that path discovery traffic can self-interfere. There is a lot of sense to this, since PDM is bursty traffic.

The test

To investigate this, we varied the route expiration time in the ten kids under the tree scenario and repeated the same test as above (*** link). We calculated the DPLI and also the percentage of data frames that were transmitted over multihop paths.
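
The multihop percentage is straightforward to compute given the per-frame hop counts (a sketch, reusing the hypothetical frame records of the earlier analysis sketch):

 # Sketch: percentage of data frames carried over multihop paths, given
 # the per-frame hop counts recovered from a capture.
 def multihop_percent(hop_counts):
     return 100.0 * sum(1 for h in hop_counts if h > 1) / len(hop_counts)

 print(multihop_percent([1] * 9 + [2]))  # 9 single-hop frames + 1 two-hop frame -> 10.0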

Test results

The self-interference of PDM traffic
Route exp time (s) Mean DPLI Stdev DPLI Multihop frames %
5 1.09 0.02 16.58
10 1.05 0.01 9.99
15 1.05 0.01 8.72
20 1.05 0.01 9.65
25 1.04 0.01 7.44
30 1.03 0.01 6.24

The answer

Yes. The more path discovery traffic you have, the longer your paths will get. But why? That's question #3.