# Circle positioning algorithm


The Sugar mesh view consists of a number of icons, many of them circular. We would prefer that these icons not overlap, so that the user can easily see all of them. However, we would also like each icon to stay near a fixed position, so that when glancing from one screen to another, or at the same screen over time, one can quickly find a particular icon of interest.

Mathematically, then, the problem is as follows. We have a number of circles, labeled by $i\in[1\dots N]$. Each of these circles has a radius $r_i$ and an ideal location $\left\langle a_i, b_i\right\rangle$. However, each circle has been moved to a new location $\left\langle c_i, d_i\right\rangle$. Our goal is to choose these locations to minimize $\sum_i (a_i-c_i)^2 + (b_i-d_i)^2$, subject to the constraint that for each pair of circles $i$ and $j$, $\sqrt{(c_i-c_j)^2 + (d_i-d_j)^2} \geq r_i + r_j$.
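The objective and the exact (pre-linearization) constraint can be restated directly in Python. This is an illustrative sketch; the helper names are not taken from discus.py.

```python
import math

def cost(ideal, placed):
    """Sum of squared displacements from ideal positions.

    ideal, placed: lists of (x, y) circle centers, i.e. (a_i, b_i)
    and (c_i, d_i) in the notation above.
    """
    return sum((a - c) ** 2 + (b - d) ** 2
               for (a, b), (c, d) in zip(ideal, placed))

def feasible(placed, radii):
    """True if no pair of circles overlaps: for every i < j, the
    center distance must be at least r_i + r_j."""
    n = len(placed)
    for i in range(n):
        for j in range(i + 1, n):
            (ci, di), (cj, dj) = placed[i], placed[j]
            if math.hypot(ci - cj, di - dj) < radii[i] + radii[j]:
                return False
    return True
```

The optimization problem is then: minimize `cost(ideal, placed)` over all `placed` for which `feasible(placed, radii)` holds.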

The cost function above is quadratic. If we square both sides of the constraint to yield the equivalent constraint $(c_i-c_j)^2 + (d_i-d_j)^2 \geq (r_i + r_j)^2$, we see that the constraint is quadratic as well. This problem is therefore a quadratically constrained quadratic program (QCQP).

Unfortunately, the generic QCQP problem is NP-hard; in particular, QCQP can be NP-hard when the quadratic constraints are nonconvex. The constraints in this case are indeed nonconvex, and so it is possible (though not certain) that this circle-packing problem is intractably hard.

One way to avoid the difficulty of solving a nonconvex optimization problem is to restrict the search to a convex subset of the feasible region. This subset excludes many valid solutions, but it reduces the problem to a convex optimization, which can typically be solved by a fast algorithm. In our case, replacing the quadratic constraints with a linearization turns the QCQP problem into a classic linearly constrained quadratic programming problem, for which there are many solver libraries available, such as CVXOPT.

There are many possible linearizations of this constraint. We must choose one that includes as many valid solutions as possible but no invalid (overlapping) solutions; the new constraint must be at least as tight as the old constraint. I have not found a rigorous method of linearizing constraints, but one choice that seems very good to me is $\left\langle c_i-c_j,d_i-d_j\right\rangle\cdot\left\langle a_i-a_j,b_i-b_j\right\rangle \geq \left|\left\langle a_i-a_j,b_i-b_j\right\rangle \right|\cdot (r_i + r_j)$. In words, this constraint demands that the projection of the distance between the centers of the circles onto the line connecting their ideal positions be large enough that they cannot overlap.
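The linearized test for a single pair of circles can be written out directly (a sketch; the function name and argument layout are illustrative). Note why this constraint is at least as tight as the original one: by the Cauchy–Schwarz inequality, the projection of the placed separation onto the ideal direction never exceeds the actual center distance times $\left|\left\langle a_i-a_j,b_i-b_j\right\rangle\right|$, so satisfying the linear constraint forces the center distance to be at least $r_i + r_j$.

```python
import math

def linearized_ok(ideal_i, ideal_j, placed_i, placed_j, ri, rj):
    """Linearized no-overlap constraint for one pair of circles.

    ideal_*: ideal centers (a, b); placed_*: chosen centers (c, d).
    Requires the projection of the placed separation onto the ideal
    separation direction to be at least r_i + r_j.
    """
    ux, uy = ideal_i[0] - ideal_j[0], ideal_i[1] - ideal_j[1]
    norm = math.hypot(ux, uy)  # |<a_i - a_j, b_i - b_j>|
    proj = (placed_i[0] - placed_j[0]) * ux + (placed_i[1] - placed_j[1]) * uy
    # By Cauchy-Schwarz, proj <= |placed_i - placed_j| * norm, so
    # proj >= norm * (ri + rj) implies the circles do not overlap.
    return proj >= norm * (ri + rj)
```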

This constraint has two excellent properties. First, if two circles are not in contact with any third circle, they will be placed exactly optimally. Second, the placement error is second-order in the tangential offset, so as long as a circle is not moved sideways by a large fraction of its radius, it will remain close to its optimal position.

I have implemented this algorithm in discus.py, using the CVXOPT package to solve the resulting quadratic program. This implementation also allows constraints representing an outer bounding rectangle.
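The shape of the resulting program can be sketched as follows. CVXOPT's `solvers.qp` minimizes $\frac{1}{2}x^TPx + q^Tx$ subject to $Gx \leq h$; the sketch below assembles those matrices with NumPy. This is a minimal illustration under an assumed variable layout $x = [c_1 \dots c_N, d_1 \dots d_N]$; it is not the actual code in discus.py, and it omits the bounding-rectangle constraints.

```python
import numpy as np

def build_qp(ideal, radii):
    """Assemble P, q, G, h for the linearized placement QP.

    Objective: sum_i (c_i - a_i)^2 + (d_i - b_i)^2, dropping the
    constant a_i^2 + b_i^2 terms (which do not affect the argmin).
    One linear row per pair encodes the linearized no-overlap
    constraint, rewritten as  -u . (p_i - p_j) <= -|u| (r_i + r_j),
    where u is the ideal separation vector.
    """
    n = len(ideal)
    a = np.array([p[0] for p in ideal], dtype=float)
    b = np.array([p[1] for p in ideal], dtype=float)
    P = 2.0 * np.eye(2 * n)
    q = -2.0 * np.concatenate([a, b])
    rows, rhs = [], []
    for i in range(n):
        for j in range(i + 1, n):
            ux, uy = a[i] - a[j], b[i] - b[j]  # ideal separation u
            norm = np.hypot(ux, uy)
            row = np.zeros(2 * n)
            row[i], row[j] = -ux, ux          # coefficients of c_i, c_j
            row[n + i], row[n + j] = -uy, uy  # coefficients of d_i, d_j
            rows.append(row)
            rhs.append(-norm * (radii[i] + radii[j]))
    return P, q, np.array(rows), np.array(rhs)
```

These arrays would then be wrapped in `cvxopt.matrix` and passed to `cvxopt.solvers.qp(P, q, G, h)`.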

A few caveats are in order. There may be no solution to the ideal problem; i.e. there may be no way to pack the circles into the available screen space. The process of linearizing the constraints excludes many possible solutions, and so even if there is a valid solution to the QCQP problem, our linearized QP problem may have no solution. In principle, quadratic programming problems can be solved in polynomial time, but I can make no claims regarding the speed of CVXOPT's solver code. Even if CVXOPT's solver code is polynomial time, and that polynomial is of relatively small order, our problem has at least $N(N-1)/2$ constraints to ensure that each pair of circles does not collide. Just writing down these constraints therefore necessarily requires quadratic time. So far, it is not clear what the true cost of these operations is for the numbers of circles in which we are interested.

Future directions include: dynamically shrinking the radii of all circles until they fit (tested, working), updating the constraints lazily as circles are added or removed, and only writing in those constraints that are likely to be relevant.

Additionally, for improved accuracy, the above optimization can be iterated, adjusting the constraints for each iteration with the determined optimum replacing $a$ and $b$. This is actually a classic approach: solving a nonconvex Quadratically Constrained Quadratic Program by Sequential Quadratic Programming, with each program in the sequence generated by a Reformulation-Linearization Technique. However, it represents a fairly expensive computation for a gain of little practical significance.
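Concretely, if $\left\langle c_i^{(k)}, d_i^{(k)}\right\rangle$ denotes the optimum found at iteration $k$, then (under this reading of the scheme) iteration $k+1$ would solve the same quadratic program with the pairwise constraint

$\left\langle c_i-c_j,d_i-d_j\right\rangle\cdot\left\langle c_i^{(k)}-c_j^{(k)},d_i^{(k)}-d_j^{(k)}\right\rangle \geq \left|\left\langle c_i^{(k)}-c_j^{(k)},d_i^{(k)}-d_j^{(k)}\right\rangle\right|\cdot (r_i + r_j)$

while the objective continues to measure displacement from the original ideal positions $\left\langle a_i, b_i\right\rangle$.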