Intersecting Line Segments In Cyclotomic Rings Without Tears

A while ago I wrote a post about how I re-discovered cyclotomic integer rings, which was actually just preparation for some more practical ideas that I wanted to develop. As it often happens, it took a while to find enough time to pursue this further, but here comes the promised part 2 (you might want to (re-)read or skim part 1 first if you are unfamiliar with the topic), and there will definitely be a part 3.

In the meantime, I have been working on an easy-to-use Rust implementation of all the ideas that I presented and sketched out. It was quite some effort[1] for me to port the prototype Python code presented in the last post to Rust, but I think it was absolutely worth it. Now everything is much faster, properly tested, and documented. In fact, I am satisfied enough with the result that I decided to package it up into my first Rust crate, tilezz. While the package is not yet where I want it to be for a neat 0.1.0 release, I decided that another concepts post is long overdue.

In the previous post you learned a bit about cyclotomic rings and fields, and why you might consider using them as points in the context of 2D geometry. In this post, I want to discuss the next crucial building block in my larger project. Before we can talk about polygons, we should talk about line segments.

Line Segment Intersection 101

There are many almost-correct solutions all over the internet. Here I want to show you one solution that not only handles all edge cases, but can even be generalized to work over cyclotomic rings.

Let $(a, b)$ and $(c, d)$ be two non-degenerate line segments in the plane, i.e. $a \neq b, c\neq d$ and for now, you can assume that $a,b,c,d \in \mathbb{R}^2$. Let us first take a look at the different cases.

  1. parallel case – there are no intersection points
  2. collinear case
    1. the segments do not overlap
    2. the segments touch in a single endpoint they have in common
    3. the segments overlap in an interval (which is itself a line segment)
  3. normal case
    1. the segments do not overlap (even though their extended lines do)
    2. the segments intersect in a point inside both segments
    3. an endpoint of one segment intersects some inner point of the other
    4. the segments intersect in an endpoint of both segments

The half-solution winning the prize for brevity and elegance is this one. However, it does not handle cases 2 and 3.4 correctly. After looking at many other solutions and different implementations of various checks and operations, I mashed up my own variation of a solution to this common exercise. Without going into detail (because this is definitely not the interesting part of this post), here is a Python snippet that correctly handles all cases.[2]

from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def wedge(a: Point, b: Point) -> float:
    """Return wedge product of two vectors."""
    return (a.x * b.y) - (a.y * b.x)

def dot(a: Point, b: Point) -> float:
    """Return dot product of two vectors."""
    return (a.x * b.x) + (a.y * b.y)

def line_through(p: Point, q: Point) -> tuple[float, float, float]:
    """Return (a, b, c) for line ax + by + c = 0 through points P and Q."""
    return (p.y - q.y, q.x - p.x, p.x * q.y - q.x * p.y)

def is_on_line(p: Point, l: tuple[float, float, float]) -> bool:
    """Return whether point P is on the line given by the coefficients."""
    (a, b, c) = l
    return a * p.x + b * p.y + c == 0

def is_between(a: Point, b: Point, c: Point) -> bool:
    """Return whether point B is strictly between A and C."""
    ba = Point(a.x-b.x, a.y-b.y)
    cb = Point(b.x-c.x, b.y-c.y)
    return wedge(ba, cb) == 0 and dot(ba, cb) > 0

def is_ccw(a: Point, b: Point, c: Point) -> bool:
    """Return whether the points A, B, C are oriented counter-clockwise."""
    ab = Point(b.x-a.x, b.y-a.y)
    ac = Point(c.x-a.x, c.y-a.y)
    return wedge(ab, ac) > 0

def intersect(a: Point, b: Point, c: Point, d: Point) -> bool:
    """Return true if line segments AB and CD intersect."""
    if a == c or a == d or b == c or b == d:
        return True
    l_ab = line_through(a, b)
    if is_on_line(c, l_ab) and is_on_line(d, l_ab):
        return is_between(a, c, b) or is_between(a, d, b)
    return is_ccw(a,c,d) != is_ccw(b,c,d) and is_ccw(a,b,c) != is_ccw(a,b,d)

# Some test cases using this point layout: E F
#                                          A B C D
A, B, C, D, E, F = Point(0, 0), Point(1, 0), Point(2, 0), Point(3, 0), Point(0, 1), Point(1, 1)
print(f"intersect ({A}, {B}) and ({E}, {F}): {intersect(A, B, E, F)} (exp: False)")  # case 1
print(f"intersect ({A}, {B}) and ({C}, {D}): {intersect(A, B, C, D)} (exp: False)")  # case 2.1
print(f"intersect ({A}, {B}) and ({B}, {C}): {intersect(A, B, B, C)} (exp: True)")  # case 2.2
print(f"intersect ({A}, {C}) and ({B}, {D}): {intersect(A, C, B, D)} (exp: True)")  # case 2.3a
print(f"intersect ({A}, {D}) and ({B}, {C}): {intersect(A, D, B, C)} (exp: True)")  # case 2.3b
print(f"intersect ({A}, {B}) and ({F}, {D}): {intersect(A, B, F, D)} (exp: False)")  # case 3.1
print(f"intersect ({A}, {F}) and ({B}, {E}): {intersect(A, F, B, E)} (exp: True)")  # case 3.2
print(f"intersect ({A}, {C}) and ({B}, {F}): {intersect(A, C, B, F)} (exp: True)")  # case 3.3
print(f"intersect ({A}, {E}) and ({A}, {B}): {intersect(A, E, A, B)} (exp: True)")  # case 3.4

Great, now we have this out of the way and can proceed to the interesting part.

From the Cartesian Plane to Cyclotomic Rings

(NOTE: From here on I assume that you have read part 1 of this article series)

Note that the code above only makes use of addition, multiplication, equality, and greater-than-zero. As you will see, this allows us to use it in a more general setting than the ordinary 2D plane, such as suitable constructible cyclotomic rings.

In the rest of the post the distinction between rings and fields does not matter – the only caveat is that you can detect an intersection in the ring, but only in the field can you always actually compute the intersection point. Regardless of the approach you use, in general you will need division in some computation step. I don’t know whether this is formally proven somewhere, but I have a strong feeling that it should be (probably some textbook result in algebraic geometry), as I found not a single formula for the intersection point that completely avoids division.
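For illustration, here is a small sketch (my own, not code from the crate) that computes such an intersection point with exact rational arithmetic, using the same `(a, b, c)` line representation as in the snippet above. The coordinates fall out of Cramer's rule, and the division by the determinant is exactly the step that forces us from the ring into the field:

```python
from fractions import Fraction

def line_through(px, py, qx, qy):
    """Coefficients (a, b, c) of the line ax + by + c = 0 through P and Q."""
    return (py - qy, qx - px, px * qy - qx * py)

def line_intersection(l1, l2):
    """Intersection point of two non-parallel lines, via Cramer's rule."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1  # zero iff the lines are parallel
    if det == 0:
        return None
    # solving a1*x + b1*y = -c1 and a2*x + b2*y = -c2 requires dividing by det
    x = Fraction(b1 * c2 - b2 * c1, det)
    y = Fraction(c1 * a2 - c2 * a1, det)
    return (x, y)

# the diagonals of the unit square cross at (1/2, 1/2)
p = line_intersection(line_through(0, 0, 1, 1), line_through(0, 1, 1, 0))
```

Note how everything except the last two divisions stays inside the integers.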

Can you guess where I spent most of my time while figuring this out? It is the greater-than-zero operation! I explained in the last post how you can represent constructible cyclotomic fields as integer-valued coefficient vectors over certain square roots, which are used as linearly independent pseudo-dimensions. Addition and equality work component-wise and therefore are trivial; multiplication is a little cumbersome, but I already explained how it can be handled without any general-purpose symbolic expression engine. So the one missing operation to make the code above work over cyclotomics is a greater-than-zero test.

Wait, what? But cyclotomic numbers are complex-valued! How does this even make sense? Valid objection, but if we look more carefully, we see that we only ever need to check whether a cyclotomic number lying on the real line is greater than zero. Note that the greater-than-zero checks in the code above are applied to the results of the dot and wedge products, which are always real-valued.

If this is not clear, consider that we are using the real and imaginary dimensions of the complex plane for our X and Y coordinates, i.e. for two numbers $a$ and $b$ in the same cyclotomic field, we compute the dot product as $\mathfrak{Re}(a)\mathfrak{Re}(b) + \mathfrak{Im}(a)\mathfrak{Im}(b)$. This means that the imaginary parts either cancel out or are multiplied by 0 and vanish, leaving us with a real-valued cyclotomic number as the result (and the same argument works for the wedge product).
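To make this concrete with plain floating-point numbers (a toy check using Python's built-in `complex`, not the exact symbolic arithmetic):

```python
def dot(a: complex, b: complex) -> float:
    """Dot product of complex numbers viewed as 2D vectors (Re -> x, Im -> y)."""
    return a.real * b.real + a.imag * b.imag

z, w = 2 + 3j, -1 + 4j
# the same value as the real part of conj(z) * w -- the imaginary parts cancel
assert dot(z, w) == (z.conjugate() * w).real
```

In the symbolic setting, `complex` would be replaced by a coefficient vector over the square-root basis, but the formula stays the same.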

As the dot product of a cyclotomic number with itself gives us the square of its absolute value, such a procedure is also of interest whenever we want to sort or compare numbers by their absolute value. Even though we cannot compute the absolute value itself, because cyclotomic fields are not quadratically closed, in many algorithms its square can be used instead.

Because we represent such numbers symbolically (i.e. without evaluating the square roots), we essentially do not perform the last step of the dot product, i.e. the sum over the terms. So unlike the scalar-valued norm in the classical setting, the result here is again a coefficient vector. But, as noted above, we know it is real-valued – and such numbers are totally ordered, because $\mathbb{R}$ is. So to decide symbolically whether $||a||_2 > ||b||_2$, we only need to check that $a\odot a - b \odot b > 0$, where $\odot$ is the dot product, which in this setting happens to be the same as the Hadamard product.
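As a float-based illustration of this trick (a toy sketch – in the symbolic setting the numbers below would be coefficient vectors instead of `complex` values):

```python
def norm_sq(a: complex) -> float:
    """Squared absolute value |a|^2 = a . a -- no square root required."""
    return a.real * a.real + a.imag * a.imag

def norm_greater(a: complex, b: complex) -> bool:
    """Whether |a| > |b|, using only addition, multiplication and a sign check."""
    return norm_sq(a) - norm_sq(b) > 0

# sort points by distance from the origin without ever calling abs()
pts = [3 + 4j, 1 + 1j, 2j]
pts.sort(key=norm_sq)
```

Since squaring is monotone on non-negative reals, sorting by the squared norm gives the same order as sorting by the norm itself.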

A Simple Instance Of The Square-Root Sum Problem

Now that we are asking this question for real-valued constructible cyclotomic numbers, we have reduced the greater-than-zero check to the square-root sum problem. The good news is that this is a well-studied problem; the bad news is that in general (i.e. without restrictions on the form or number of terms) it is pretty tricky, and there are even still open questions concerning its algorithmic complexity! It most definitely is not something you can quickly implement correctly. So what to do?

Well, nobody actually said that we have to solve the general problem! For each fixed cyclotomic field we have a fixed integral basis. If the fields you work with have at most 4 distinct square roots, such as $\mathbb{Z}[\zeta_{24}]$ (which only needs $\sqrt{1}, \sqrt{2}, \sqrt{3}$ and $\sqrt{6}$ as the set of square roots forming a linearly independent integral basis), you are in luck – then the sign of the number can be determined using nothing but high-school algebra! We can set up inequalities and transform them until no square roots remain, and then simply evaluate the expression over good old integers or rationals.[3]

Two Square Roots

Now, skipping the completely trivial cases, we can take a look at a sum of two terms involving distinct squarefree coprime roots. Let $a,b\in\mathbb{Z}$ and $m, n > 1$. If you want to know whether $a\sqrt{n} + b\sqrt{m} > 0$, there are the following possibilities:

  1. $a = 0 \lor b = 0 \Rightarrow$ trivial (actually not really two terms)
  2. $\mathrm{sgn}(a) = \mathrm{sgn}(b) \Rightarrow$ trivial (just return the sign)
  3. $\mathrm{sgn}(a) \neq \mathrm{sgn}(b) \Rightarrow$ two symmetric cases (track the sign)
    1. $a>0$ and $b<0$
    2. $a<0$ and $b>0$

Now for case 3.1 we have:

$$a\sqrt{n} + b\sqrt{m} > 0\Leftrightarrow na^2 > mb^2 \Leftrightarrow na^2 - mb^2 > 0$$

and for case 3.2 symmetrically:

$$a\sqrt{n} + b\sqrt{m} > 0 \Leftrightarrow na^2 < mb^2 \Leftrightarrow mb^2 - na^2 > 0$$

In addition to that, we also have the cases where $a\sqrt{n} + b\sqrt{m} < 0$, which we can treat like $\leq 0$ (i.e. just invert the result of $> 0$), assuming that cases 1. and 2. are already excluded (specifically, the case $a = b = 0$).[4]

After thinking through and looking at all these cases, we can condense the whole evaluation with the messy symmetric cases into a single nice expression that looks almost too obvious[5] (but does not generalize naively):

$$ \mathrm{sgn}(a\sqrt{n} + b\sqrt{m}) = \mathrm{sgn}(\mathrm{sgn}(a)a^2 n + \mathrm{sgn}(b)b^2 m) $$

Now, this is not the kind of equation I have seen often in math literature, but in practical implementations you will often want to provide a sign operation instead of separate combinations of $<, =, >$, etc.
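As a quick sanity check, the formula translates directly into a few lines of Python (my own sketch; the name `sign_sqrt_pair` is made up and not part of the crate's API). It assumes $n \neq m$, both squarefree, so that the sum can only vanish if $a = b = 0$:

```python
def sgn(x: int) -> int:
    """Sign of an integer: -1, 0, or 1."""
    return (x > 0) - (x < 0)

def sign_sqrt_pair(a: int, n: int, b: int, m: int) -> int:
    """Sign of a*sqrt(n) + b*sqrt(m) for distinct squarefree n and m."""
    return sgn(sgn(a) * a * a * n + sgn(b) * b * b * m)

# sqrt(2) - sqrt(3) < 0, while 3*sqrt(2) - 2*sqrt(3) > 0
assert sign_sqrt_pair(1, 2, -1, 3) == -1
assert sign_sqrt_pair(3, 2, -2, 3) == 1
```

Only integer arithmetic is involved, so the result is exact.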

Four Square Roots (With Side Constraints)

After this warm-up, I will skip the case of three roots (because no cyclotomic field has an integral basis built from exactly three roots) and move on directly to the case of four.

Now our expression has coefficients $a,b,c,d\in\mathbb{Z}\setminus\{0\}$ and distinct coprime squarefree constants $k,m,n,l \in \mathbb{N}$. Furthermore, as this is intended for use with cyclotomic fields, we will also assume that $k=1$ and $l = mn$, and I leave the completely general solution as an exercise.

First, separate the roots into two expressions with two roots each. This can be done symmetrically in multiple ways; I just picked one natural possibility:

$$ \begin{align*} a\sqrt{k} + b\sqrt{m} + c\sqrt{n} + d\sqrt{mn} &> 0 \\ b\sqrt{m} + c\sqrt{n} &> -(a\sqrt{k} + d\sqrt{mn}) \end{align*} $$

Now use the solution above to check the sign of the left and right side independently. If we have that

$$ \mathrm{sgn}(b\sqrt{m} + c\sqrt{n}) = \mathrm{sgn}(a\sqrt{k} + d\sqrt{mn}) $$

then we are done and can just return the sign. Otherwise, we have the non-trivial case where the two expressions contribute with opposite signs and we have to analyze further. Now, w.l.o.g., we assume that the term on the left, i.e. $b\sqrt{m} + c\sqrt{n}$, is positive and the term on the right is negative.

We will solve this similarly to the two-root case: by squaring, moving terms around, and carefully watching the signs. Since both sides of the inequality are positive (and $k = 1$), squaring once is safe:

$$ \begin{align*} b\sqrt{m} + c\sqrt{n} &> -(a + d\sqrt{mn}) \\ b^2 m + c^2 n + 2bc \sqrt{mn} &> a^2 + d^2 mn + 2ad \sqrt{mn} \\ b^2 m + c^2 n - d^2 mn - a^2 &> 2(ad - bc) \sqrt{mn} \\ \end{align*} $$

Abbreviating $X := b^2 m + c^2 n - d^2 mn - a^2$ and $Y := 2(ad - bc)$, we cannot simply square a second time, because $X$ and $Y$ may have either sign. Luckily, we do not need to – the remaining comparison $X > Y\sqrt{mn}$ is just another instance of the two-root case, so we can reuse the formula from the previous section:

$$ \mathrm{sgn}(X - Y\sqrt{mn}) = \mathrm{sgn}\left(\mathrm{sgn}(X)\,X^2 - \mathrm{sgn}(Y)\,mn\,Y^2\right) $$

This is all we need – all roots are eliminated and we are left with an expression involving only integers (or rationals, if you are working in a field). So the result of the sign function is given by:

$$ \mathrm{sgn}(b\sqrt{m} + c\sqrt{n})\cdot\mathrm{sgn}\left(\mathrm{sgn}(X)\,X^2 - \mathrm{sgn}(Y)\,mn\,Y^2\right) $$

This expression might not look as natural as the one for the simpler case, but it similarly condenses all the case-splitting logic that we would otherwise have to spell out. The left factor accounts for the cases where the signs of the two subexpressions are flipped, and the right factor is just what we derived in the inequalities above.
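The whole four-root procedure then fits into a short Python sketch (again my own illustration with made-up names, not the tilezz API). Note that the comparison remaining after the first squaring is itself a two-root comparison, so the sketch delegates it to the two-root sign test, which takes care of the signs of both sides:

```python
def sgn(x: int) -> int:
    """Sign of an integer: -1, 0, or 1."""
    return (x > 0) - (x < 0)

def sign_sqrt_pair(a: int, n: int, b: int, m: int) -> int:
    """Sign of a*sqrt(n) + b*sqrt(m) for distinct squarefree n and m."""
    return sgn(sgn(a) * a * a * n + sgn(b) * b * b * m)

def sign_sqrt_quad(a: int, b: int, c: int, d: int, m: int, n: int) -> int:
    """Sign of a + b*sqrt(m) + c*sqrt(n) + d*sqrt(m*n), for coprime
    squarefree m, n > 1 (i.e. the special case k = 1 and l = m*n)."""
    left = sign_sqrt_pair(b, m, c, n)       # sign of b*sqrt(m) + c*sqrt(n)
    right = sign_sqrt_pair(a, 1, d, m * n)  # sign of a + d*sqrt(m*n)
    if left == right or right == 0:
        return left
    if left == 0:
        return right
    # non-trivial case: the halves have opposite signs. Squaring once leaves
    # a comparison x > y*sqrt(m*n), which is again a two-root sign test.
    x = b * b * m + c * c * n - d * d * m * n - a * a
    y = 2 * (a * d - b * c)
    return left * sign_sqrt_pair(x, 1, -y, m * n)

# 1 + sqrt(2) + sqrt(3) - sqrt(6) ~ 1.697 > 0
assert sign_sqrt_quad(1, 1, 1, -1, 2, 3) == 1
# 1 + 2*sqrt(2) - sqrt(3) - sqrt(6) ~ -0.353 < 0
assert sign_sqrt_quad(1, 2, -1, -1, 2, 3) == -1
```

Everything stays in exact integer arithmetic, so no floating-point rounding can flip the result.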

Summary

The operation of checking the sign of a number might look harmless at first sight, but it can cause quite some headache when you work in more limited structures such as rings and fields, and the general square-root sum problem is surprisingly difficult.

However, for sufficiently nice cyclotomic rings, we have seen how the problem can still be solved rather easily – the main difficulty was managing the explosion of symmetric cases, which requires tracking the signs of our input coefficients and of certain subexpressions. But once this is all done, we can finally do useful things such as comparing norms or testing for line intersections!

In the next post of this series, I am finally going to talk about simple polygons. As already teased in the previous post, it will not only be about lifting concepts and algorithms from 2D geometry to cyclotomic rings – it will also involve a non-standard representation of polygonal chains and polygons as sequences of discrete angles, interpreted like a minimalistic version of turtle graphics. This might look like a weird design choice, but it has some interesting features and algorithmic advantages in the context of tiles and tilings, for example for computing candidate match sites along the boundary of two tiles, or for gluing them together efficiently.


  1. I got lost in trait and macro hell for a while, and probably the way I did it is not the most elegant or idiomatic. However, I feel like I learned not only a great deal about cyclotomics, but also about non-trivial Rust development. I am still not happy about many implementation details, but I think the user-facing API is getting pretty nice.

  2. I ported this back from my Rust crate to make it more accessible. Unlike the Rust code, the Python snippet is not battle-tested.

  3. However, if you have 5 or more distinct square roots, this approach will not work. Sadly, I could not find a reference for this (I remember seeing it in some paper or textbook), but if you try it yourself, you will quickly see that the same approach with more than 4 square roots does not reduce the number of roots in the expressions.

  4. Here we silently make another important non-trivial assumption – that this expression cannot sum to zero in the non-degenerate case where both $a, b \neq 0$. But luckily, there is a result saying that this is actually always the case in our setting.

  5. The funny story here is that initially I was just messing around and came up with the correct expression intuitively. To my surprise, it seemed to actually do the job! But to confirm that it was not just working by accident in simple cases, I did the case analysis that you see here.