Ejabberd resource tests
The purpose of these tests
The XS school server is going to be installed in schools with more than 3000 students. In these large schools, ejabberd is crucial for functional collaboration. If all the students are using their laptops at once, ejabberd might be considerably stressed. These tests were run to find out how it performs in various circumstances.
Set up
The CPU of the server running ejabberd reports itself as "Intel(R) Pentium(R) Dual CPU E2180 @ 2.00GHz". The server has 1 GB of RAM and 2 GB of swap.
The client load was provided by [http://dev.laptop.org/git/users/guillaume/hyperactivity/.git/ hyperactivity]. Each client was limited in the number of connections it could maintain (by, it seems, Telepathy Gabble or dbus), so several machines were used in parallel. Four of the client machines were fairly recent commodity desktops/laptops -- one of them was the server itself -- and four were XO laptops. The big machines were connected via wired ethernet and could provide up to 250 connections each, while the XOs used the mesh and provided 50 clients each. From time to time hyperactivity would fail at these numbers and have to be restarted.
It took time to work out these limits, so the tests were initially tentative. The graphs below, the script that made them, longer versions of these notes, and perhaps unrelated stuff can be found at [1].
In order to test, I had to add the line
{registration_timeout, infinity}.
to /etc/ejabberd/ejabberd.cfg (including the full-stop).
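For context, here is a sketch of roughly how that part of an ejabberd 2.0.x /etc/ejabberd/ejabberd.cfg might look; only the registration_timeout line is the change described above, the surrounding options are purely illustrative.

%% ejabberd.cfg is a list of Erlang terms, each terminated by a full-stop.
{hosts, ["schoolserver"]}.   %% illustrative host name, not from these notes
{loglevel, 4}.               %% illustrative
%% Disable the throttle on in-band registrations, so that hundreds of
%% test accounts can be registered in quick succession.
{registration_timeout, infinity}.

By default ejabberd 2.x only allows one in-band registration from a given source every 600 seconds, which would make creating hundreds of test accounts impractical.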
The memory usage numbers below were gathered by ps_mem.py, and the load average is as reported by top. These are not peak numbers, but approximately what ejabberd settled to after running for some time. For the record, the memory use reported by top tracked that of ps_mem.py, but was consistently a little higher (as if it were counting in decimal megabytes, though I am not sure whether this is the case).
Logging and graphing scripts
The scripts that collected the information and made the graphs are stored in [http://dev.laptop.org/git/users/dbagnall/ejabberd-tests.git/ git].
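They are not reproduced here, but a minimal sketch of the sort of collection loop involved might look like the following. This is an illustration only: the log path, the sampling interval, and the idea of summing the resident set size of the Erlang "beam" processes are assumptions made for the example; the real scripts used ps_mem.py (which computes a more careful private+shared figure) and live in the repository above.

import os
import time

LOG_PATH = "ejabberd-usage.log"   # hypothetical path
INTERVAL = 60                     # seconds between samples (assumed)

def load_average():
    # The 1-minute load average, the same figure top reports.
    with open("/proc/loadavg") as f:
        return float(f.read().split()[0])

def beam_rss_kb():
    # Sum the resident set size (kB) of every Erlang VM ("beam"/"beam.smp")
    # process -- a rough stand-in for what ps_mem.py reports for ejabberd.
    total = 0
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open("/proc/%s/stat" % pid) as f:
                comm = f.read().split("(", 1)[1].split(")", 1)[0]
            if not comm.startswith("beam"):
                continue
            with open("/proc/%s/status" % pid) as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        total += int(line.split()[1])  # value is in kB
        except (IOError, IndexError):
            continue  # the process exited while we were reading it
    return total

if __name__ == "__main__":
    while True:
        with open(LOG_PATH, "a") as f:
            f.write("%d %d %.2f\n" % (time.time(), beam_rss_kb(), load_average()))
        time.sleep(INTERVAL)

A log in this form is straightforward to plot against the number of client connections with gnuplot or similar.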
Benchmark results
Comparisons
- Ejabberd_resource_tests/tls_comparison -- comparing aspects of tries 6 and 7.
With shared roster
- Ejabberd_resource_tests/try_9 -- With ejabberd 2.0.2 and postgres.
- Ejabberd_resource_tests/try_8 -- With ejabberd 2.0.1 and postgres.
- Ejabberd_resource_tests/try_7 -- identical conditions to try 6, but with the old SSL code.
- Ejabberd_resource_tests/try_6 -- up to 750 connections with shared roster and new SSL code.
- Ejabberd_resource_tests/try_5 -- up to 450 connections with shared roster.
The results below might be less trustworthy, as the shared roster was not always working.
- Ejabberd_resource_tests/try_1
- Ejabberd_resource_tests/try_2
- Ejabberd_resource_tests/try_3
- Ejabberd_resource_tests/try_4 -- faulty: the shared roster was not working.
Raw benchmark results
http://dev.laptop.org/~dbagnall/ejabberd-tests/ -- includes graphs.
Issues
- Is pounding ejabberd every 15 seconds reasonable? A lighter load actually makes very little memory difference, but it probably saves CPU time.