Path Discovery Mechanism/Sanity


This page is dedicated to the following investigation: does the Path Discovery Mechanism (PDM) behave as expected in terms of path length?

There have been observations that the implemented PDM may wrongly produce unnecessarily long paths.

The canonical example is the ultra-dense mesh cloud of a classroom, where all the nodes are within the transmission range of each other. In such a scenario, does the PDM form multihop paths too often? How often?

And if so, can we expect all paths to be single hop in this scenario? Should we try to fix or at least reduce this behaviour, and how could we do it?

In trying to answer the questions above, other questions will come up.

For instance: how do data traffic and PDM traffic influence each other? Will the burstiness of the PDM's traffic significantly degrade data throughput? And will data traffic prevent path formation or induce unnecessarily long paths?


The PDM sanity check #1

Let's start looking into PDM sanity by confirming whether the mechanism really behaves in unexpected ways.

For this initial step, we'll use a Ten Kids Under the Tree testbed in which ten XOs are placed in a very dense configuration (an area of less than 2 square meters) and every node is within transmission range of every other.

In such a situation one would expect the paths to be mostly, if not all, single hop, because a successful transmission at 54 Mbit/s between any two nodes is theoretically always possible. A path with two or more hops will have to be explained and, the longer the path, the more suboptimal and suspicious the result.

The test

The tests were performed in a quiet environment (noise floor < -96 dBm), and the build used was 703 (Update.1-703) with libertas firmware 22.p10. In order to use this firmware release, a special test kernel was used. In addition, the following configuration was applied:

  • mesh TTL was set to 5
  • multicast pings were enabled

The workload used for this particular test was multicast pings (addressed to 224.0.0.1, default size, 1 Hz). The test was performed three times (1 minute per run).
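For reference, the sketch below shows one way such a workload could be generated. It is only an illustration: the mesh interface name (msh0) and the use of a plain iputils-style ping are assumptions, not taken from the actual test scripts.

  # Sketch of the multicast-ping workload described above.
  # Assumptions (not from the original test scripts): iputils ping is
  # available and the mesh interface is named "msh0".
  import subprocess

  DEST = "224.0.0.1"   # multicast ping target used in the test
  DURATION = 60        # one test run lasts about 1 minute

  # -c DURATION at ping's default 1-second interval gives a ~1 Hz
  # workload for ~1 minute; -I binds the pings to the mesh interface.
  subprocess.call(["ping", "-I", "msh0", "-c", str(DURATION), DEST])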

What are we measuring?

One possibility would be to simply log the forwarding table of each node during the test and then check the path lengths. That's a good test. But this method was designed to be non-intrusive and, because of that, applicable to any traffic captures available (for instance, from the Collaboration (*** link here) testbed).

So, to have a taste (or measure, if you will) of how (in)sane the PDM is in terms of path length, we'll analyse traffic captures and calculate two indexes for every path:

  • Data Path Length Index (DPLI)
  • Action Path Length Index (APLI)

Both indexes are calculated over the average hop count for each frame exchanged between two nodes (see the example in Table 1). As the names imply, the DPLI uses the data frames (in this case, ICMP echo replies) and the APLI uses the PDM frames themselves (in this case, the PREPs).

Of course the indexes are not strictly necessary (the table itself brings interesting information) and of course the indexes have no meaning by themselves. They are not actual path lengths, but an indicator of the actual path lengths that formed during the test. Apart from confirming whether multihop paths are being formed, they will give us some baselines for future comparisons and make it easier to analyse overall results.
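To make the bookkeeping concrete, here is a minimal sketch of how such an index can be computed from (neighbour, hop count) pairs extracted from a capture. The exact scaling behind the DPLI/APLI values in Table 1 is not spelled out here, so the number below should be read simply as a frame-weighted mean hop count; the helper names are illustrative, not part of the actual analysis scripts.

  # Sketch: per-neighbour hop-count distribution and a path-length index,
  # computed from (neighbour, hop_count) pairs taken from a capture.
  # How hop counts are extracted from the capture is not shown here.
  from collections import Counter, defaultdict

  def hop_distribution(frames):
      """frames: iterable of (neighbour, hop_count) tuples."""
      dist = defaultdict(Counter)
      for neighbour, hops in frames:
          dist[neighbour][hops] += 1
      return dist

  def path_length_index(counter):
      """Frame-weighted mean hop count for one neighbour."""
      total = sum(counter.values())
      return sum(hops * n for hops, n in counter.items()) / float(total)

  # Example: 90% of the frames to G arrived over 1 hop, 10% over 2 hops.
  frames = [("G", 1)] * 90 + [("G", 2)] * 10
  for neighbour, counter in hop_distribution(frames).items():
      print(neighbour, path_length_index(counter))   # G 1.1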

Table 1 shows the DPLIs for a node (B) in a real test. We can see that the paths to C, J and H are purely single hop, but for every other neighbour a longer path did exist at some point. One should keep in mind that paths are constantly rediscovered, because (1) a path expires (the expiration time defaults to 10 seconds) or (2) forwarding traffic over it fails. We can also clearly see that the path from B to D was much longer than any other path in this test, and this may be worth investigating.


Table 1 Node B's DPLIs for each neighbour
neigh 1hop 2hops 3hops 4hops 5hops DPLI
G 90.0% 10.0% 0.0% 0.0% 0.0% 1.05
A 82.1% 17.9% 0.0% 0.0% 0.0% 1.09
J 100.0% 0.0% 0.0% 0.0% 0.0% 1.00
D 59.0% 24.0% 4.0% 6.0% 7.0% 1.33
F 87.1% 8.6% 4.3% 0.0% 0.0% 1.08
H 100.0% 0.0% 0.0% 0.0% 0.0% 1.00
E 89.6% 10.4% 0.0% 0.0% 0.0% 1.05
I 86.5% 13.5% 0.0% 0.0% 0.0% 1.07
C 100.0% 0.0% 0.0% 0.0% 0.0% 1.00


How did we judge the results?

Ideally, we would expect both DPLIs and APLIs to be 1.0 or very close to 1.0. That would be the perfect scenario, in which only single-hop frames were captured.

Test results

Here is what we found out in the preliminary tests. If you are impatient, jump to ***.

Table 2 Consolidated results for the Ten Kids Under the Tree scenario
node DPLI stdev
A 1.07 0.08
C 1.05 0.07
B 1.07 0.08
E 1.04 0.06
D 1.06 0.07
G 1.05 0.07
F 1.05 0.07
I 1.04 0.05
H 1.06 0.07
J 1.06 0.08

Global result: 1.05 (stdev = 0.01)
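As a quick check on the consolidated figure, the snippet below recomputes the mean and standard deviation of the per-node DPLIs from Table 2, assuming (our reading, not stated above) that the global result is the unweighted mean across nodes.

  # Recompute the global figure from the per-node DPLIs in Table 2.
  node_dpli = {
      "A": 1.07, "B": 1.07, "C": 1.05, "D": 1.06, "E": 1.04,
      "F": 1.05, "G": 1.05, "H": 1.06, "I": 1.04, "J": 1.06,
  }
  values = list(node_dpli.values())
  mean = sum(values) / len(values)
  var = sum((v - mean) ** 2 for v in values) / len(values)
  print("mean = %.3f, stdev = %.3f" % (mean, var ** 0.5))
  # -> mean = 1.055, stdev = 0.010 (consistent with the reported global result)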