Ejabberd resource tests


== The purpose of these tests ==

The XS school server is going to be installed in schools with more than 3000 students. In these large schools, ejabberd is crucial for functional collaboration, and if all the students use their laptops at once it may come under considerable stress. These tests were run to find out how it performs in various circumstances.

== Set up ==

The CPU of the server running ejabberd reports itself as "Intel(R) Pentium(R) Dual CPU E2180 @ 2.00GHz". The server has 1 GB of RAM and 2 GB of swap.

The client load was provided by [http://dev.laptop.org/git?p=users/guillaume/hyperactivity/.git hyperactivity]. Each client machine was limited in the number of connections it could maintain (by, it seems, Telepathy Gabble or dbus), so several machines were used in parallel. Four of the client machines were fairly recent commodity desktops/laptops -- one was the server itself -- and four were XO laptops. The big machines were connected via wired ethernet and could provide up to 250 connections each, while the XOs were on the mesh network and provided 50 clients each. From time to time hyperactivity would fail at these numbers and have to be restarted.
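
hyperactivity drove the real load, but its shape is easy to sketch: each driver machine simply holds open as many concurrent XMPP sessions as it can. A minimal illustration only -- not the actual harness -- using the slixmpp Python library, with made-up account names and server:

<pre>
import asyncio
import slixmpp

class IdleClient(slixmpp.ClientXMPP):
    """Log in, announce presence, fetch the roster, then just idle."""
    def __init__(self, jid, password):
        super().__init__(jid, password)
        self.add_event_handler('session_start', self.start)

    async def start(self, event):
        self.send_presence()
        self.get_roster()

# 250 connections was roughly what one desktop-class machine sustained.
clients = [IdleClient('load%03d@schoolserver.example' % i, 'secret')
           for i in range(250)]
for client in clients:
    client.connect()   # non-blocking: queues the connection on the event loop

asyncio.get_event_loop().run_forever()
</pre>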

It took time to work out these limits, so the tests were initially tentative. The graphs below, the script that made them, longer versions of these notes, and perhaps unrelated stuff can be found at [http://dev.laptop.org/~dbagnall/ejabberd-tests/].

In order to run the tests, I had to add the line

 {registration_timeout, infinity}.

to /etc/ejabberd/ejabberd.cfg (including the full stop, which terminates the Erlang term). Without this, ejabberd's default registration timeout refuses repeated account registrations from the same source, which would have blocked the bulk creation of test accounts.
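
The timeout matters because ejabberd throttles in-band registration (XEP-0077) per source, and creating hundreds of test accounts from a handful of machines trips it immediately. A rough sketch of a single in-band registration, using the slixmpp Python library (not part of the actual setup; the JID and password are placeholders):

<pre>
import slixmpp

class RegisterBot(slixmpp.ClientXMPP):
    """Create one account via in-band registration (XEP-0077)."""
    def __init__(self, jid, password):
        super().__init__(jid, password)
        self.add_event_handler('register', self.register)
        self.add_event_handler('session_start', self.start)

    async def register(self, iq):
        # Answer the registration form with the desired credentials.
        resp = self.Iq()
        resp['type'] = 'set'
        resp['register']['username'] = self.boundjid.user
        resp['register']['password'] = self.password
        await resp.send()

    async def start(self, event):
        self.disconnect()   # the account exists; nothing more to do

bot = RegisterBot('test001@schoolserver.example', 'secret')  # placeholders
bot.register_plugin('xep_0077')            # In-band Registration
bot['xep_0077'].force_registration = True  # register even if the account is new
bot.connect()
bot.process(forever=False)
</pre>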

The memory usage numbers below were gathered by ps_mem.py, and the load average is as reported by top. These are not peak numbers, but approximately what ejabberd settled to after running for some time. For the record, the memory use reported by top tracked that of ps_mem.py, but was consistently a little higher (as if it were counting in decimal megabytes, though I am not sure whether that is the case).
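
A minimal sampler in the same spirit -- not the actual script -- assuming the ejabberd VM appears in ps as beam or beam.smp; unlike ps_mem.py it naively sums RSS, so shared pages are counted more than once:

<pre>
"""Sample ejabberd memory use and system load once a minute."""
import subprocess
import time

def ejabberd_rss_kb():
    # Summed RSS (in kB) of every beam/beam.smp process.
    out = subprocess.run(['ps', '-C', 'beam,beam.smp', '-o', 'rss='],
                         capture_output=True, text=True).stdout
    return sum(int(line) for line in out.split())

def load_average():
    # One-minute load average, as top and uptime report it.
    with open('/proc/loadavg') as f:
        return float(f.read().split()[0])

while True:
    print('%d\t%d\t%.2f' % (time.time(), ejabberd_rss_kb(), load_average()))
    time.sleep(60)
</pre>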

=== Logging and graphing scripts ===

The scripts that collected the information and made the graphs are stored in git.
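
Those are the authoritative versions; purely as an illustration of the approach, a matplotlib sketch that plots tab-separated (time, RSS, load) samples like those produced by the sampler above (the log file name is made up):

<pre>
"""Plot memory and load from tab-separated (time, rss_kb, load) samples."""
import matplotlib.pyplot as plt

times, rss_mb, load = [], [], []
with open('ejabberd-samples.log') as f:      # hypothetical file name
    for line in f:
        t, r, l = line.split('\t')
        times.append(float(t))
        rss_mb.append(int(r) / 1024.0)       # kB -> MiB
        load.append(float(l))

fig, mem_ax = plt.subplots()
mem_ax.plot(times, rss_mb, 'b-')
mem_ax.set_xlabel('time (seconds)')
mem_ax.set_ylabel('ejabberd RSS (MiB)', color='b')

load_ax = mem_ax.twinx()                     # load on a second y-axis
load_ax.plot(times, load, 'r-')
load_ax.set_ylabel('load average', color='r')

fig.savefig('ejabberd-memory-load.png')
</pre>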

=== benchmark results ===

The results below might be less trustworthy, as the shared roster was not always working.

* [[Ejabberd_resource_tests/tls_comparison]] -- comparing aspects of tries 6 and 7.
* [[Ejabberd_resource_tests/try_7]] -- identical conditions to [[Ejabberd_resource_tests/try_6| try 6]], but with the old SSL code.
* [[Ejabberd_resource_tests/try_6]] -- up to 750 connections with shared roster and new SSL code.

==== Raw benchmark results ====

http://dev.laptop.org/~dbagnall/ejabberd-tests/ -- includes graphs.

== Issues ==

* Is pounding ejabberd every 15 seconds reasonable? A lighter load actually makes very little memory difference, but it probably saves CPU time.