A Dose of Everything - Modulo Operator

November 24, 2021 | 9 minutes, 11 seconds


In some course, none in particular but likely calculus-related, you might encounter the function \((-1)^n\), defined over the integers. Usually appearing in the context of series, this function has the effect of negating every odd term. The more astute will realise it can be written as \(\cos(n\pi)\). In fact, in a Fourier series context, this fact (along with similar ones) goes a long way towards simplifying series.

Today we use this function as inspiration. In a piecewise setting, we write it as:

\[ (-1)^n=\begin{cases} 1 & n\equiv0\pmod 2\\ -1 & n\equiv1\pmod 2 \end{cases}\]

Immediately this ropes in some number theory, with the modulo operator and equivalence (one might like to describe it in terms of equivalence classes). Hence, we ask the question: can we write \(n\bmod 2\) in terms of \((-1)^n\)? The answer is yes, and we do so in a rudimentary way. Since we can't apply interpolation techniques here (yet), we'll resort to making the cases look like the conditions.

\[ \begin{align} (-1)^n &=\begin{cases} 1 & n\equiv0\pmod 2\\ -1 & n\equiv1\pmod 2 \end{cases}\\ &=1+\begin{cases} 0 & n\equiv0\pmod 2\\ -2 & n\equiv1\pmod 2 \end{cases}\\ &=1-2\begin{cases} 0 & n\equiv0\pmod 2\\ 1 & n\equiv1\pmod 2 \end{cases}\\ &=1-2(n\bmod 2)\\ \end{align}\]

Solving for \(n\bmod 2\) gives \(\frac{1}{2}-\frac{1}{2}(-1)^n\).
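As a quick sanity check (a minimal Python sketch of my own, with `mod2` a hypothetical helper name), we can compare this closed form against Python's built-in `%` operator:

```python
# n mod 2 via the closed form 1/2 - (1/2) * (-1)^n.
def mod2(n: int) -> float:
    return 0.5 - 0.5 * (-1) ** n

# Agrees with the % operator across a range of integers.
for n in range(-10, 11):
    assert mod2(n) == n % 2, n
```

Note that Python's `%` already returns a result in \([0,k)\) for negative operands, matching the number-theoretic convention used here.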

This raises the question: what about a general \(n\bmod k\) formula? This technique obviously won't generalise, so we'll have to find another method. To start, instead of working backwards from \((-1)^n\), we should work forwards from \(n\bmod 2\) and apply the same method. Again, this won't work in general, so we'll have to find some nice property that yields the same result.

Roping in Complex Numbers

After some thinking, you might notice that \(-1\) has some very special properties. Firstly, when exponentiated, its behaviour is cyclical in nature. Secondly, as a consequence, it remains bounded under that same operation. Furthermore, it is one of the 2nd roots of unity. Wondering why this would be significant, we postulate:

Suppose, given \(n\in\mathbb{Z}\) and \(k\in\mathbb{Z}\setminus\{0\}\), there exists some \(z\in\mathbb{C}\) for which \(z^n=z^{n\bmod k}\). If there does exist such a number, then \(n\equiv m\pmod k\implies z^n=z^m\) by definition.

Since this is a piecewise blog (and for other reasons you'll see soon), we'll try to solve the above for \(z\) in a piecewise manner. We let \(f(z)=z^n-z^{n\bmod k}=0\), which is a polynomial equation in \(z\). We observe we only require \(k\) points to interpolate such a function over a given interval. Hence, let \(n-lk\in\{0,1,\dots,k-1\}\) for \(l\in\mathbb{Z}\), to give us:

\[ f(z)=\begin{cases} z^{lk}-1 & n=lk \\ z^{lk+1}-z & n=lk+1 \\ \vdots & \vdots \\ z^{lk+k-1}-z^{k-1} & n=lk+k-1 \\ \star & \star \end{cases}\]

Factoring out the terms gives us:

\[ f(z)=(z^{lk}-1)\begin{cases} 1 & n-lk=0 \\ z & n-lk=1 \\ \vdots & \vdots \\ z^{k-1} & n-lk=k-1 \\ \star & \star \end{cases}\]

We can rewrite the terms as \(n-lk=m\implies n-lk\equiv m\pmod k\implies z^{n-lk}=z^m\), which will give us, after simplification:

\[ f(z)=(z^{lk}-1)z^{n-lk}=0\]

To find one such \(z\) that satisfies our relation, we might see that \(z^{lk}=1\) and choose \(l=1\); hence \(z^k=1\), so \(z\) is a \(k\)th root of unity (note that \(z^k=1\) implies \(z^{lk}=1\) for every \(l\in\mathbb{Z}\)). We'll pick the first non-\(1\) root, which is \(z=\exp(\frac{2\pi i}{k})\). To see why \(z=0,1\) are not solutions, observe that to construct \(n\bmod k\), we have the following:

\[ n\bmod k=\begin{cases} 0 & n\equiv 0\pmod k\\ 1 & n\equiv 1\pmod k\\ \vdots & \vdots \\ k-1 & n\equiv k-1\pmod k\\ \end{cases}=\begin{cases} 0 & z^n=1\\ 1 & z^n=z\\ \vdots & \vdots \\ k-1 & z^n=z^{k-1}\\ \end{cases}\]

Suppose \(z=0\) or \(z=1\); then we produce a function that is not well defined, as every condition would be identical (\(0^n=0\) for positive \(n\in\mathbb{Z}\), \(1^n=1\) for all \(n\in\mathbb{Z}\)). This property does not hold for the other roots (which you might be able to observe and prove yourself).
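Concretely (a small Python check of my own, nothing from the original derivation): for \(z=\exp(\frac{2\pi i}{k})\) the powers \(z^0,\dots,z^{k-1}\) are pairwise distinct, so the conditions separate the residue classes, whereas \(z=1\) makes every condition read \(1=1\):

```python
import cmath

k = 5
z = cmath.exp(2j * cmath.pi / k)
powers = [z ** m for m in range(k)]

# The k powers are pairwise distinct (up to floating-point error),
# so each condition z^n = z^m selects exactly one residue class.
for a in range(k):
    for b in range(a + 1, k):
        assert abs(powers[a] - powers[b]) > 1e-9

# z is genuinely a kth root of unity.
assert abs(z ** k - 1) < 1e-12
```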

In any case, one might consider interpolating this formulation. Doing so for \(n\bmod 2\) recovers the expected result. Applying the Lagrange approach to the general problem, however, gives us something nastier:

\[ n\bmod k=\sum_{m=1}^{k-1}{m\prod_{\substack{j=0\\j\neq m}}^{k-1}{\frac{z^n-z^j}{z^m-z^j}}},\quad z=\exp(\frac{2\pi i}{k})\]
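This formula can be checked numerically (a Python sketch of my own; `mod_lagrange` is a hypothetical name) against the `%` operator:

```python
import cmath

def mod_lagrange(n: int, k: int) -> float:
    """Evaluate the root-of-unity Lagrange formula for n mod k."""
    z = cmath.exp(2j * cmath.pi / k)
    total = 0
    for m in range(1, k):
        term = complex(m)
        for j in range(k):
            if j != m:
                term *= (z ** n - z ** j) / (z ** m - z ** j)
        total += term
    return total.real  # the imaginary part vanishes up to rounding

# Matches the % operator for several moduli, including negative n.
for k in (2, 3, 4, 7):
    for n in range(-2 * k, 2 * k + 1):
        assert abs(mod_lagrange(n, k) - (n % k)) < 1e-6, (n, k)
```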

If you run \(n\bmod 3\) through Mathematica using this formula, then explicitly ask for the real part and reduce to trigonometric form, you get this absolutely wonderful formula (https://cloud.ally.ws/s/r3xBzd8aTyatMdA/preview):

\[ n\bmod3=\frac{(-1)^n\sin(\frac{n\pi}{3})}{\sqrt{3}}+2\sin(\frac{n\pi}{3})^2\]
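A quick numerical check of the closed form \(\frac{(-1)^n\sin(n\pi/3)}{\sqrt{3}}+2\sin(\frac{n\pi}{3})^2\) (a Python sketch of my own; `mod3` is a hypothetical name):

```python
import math

def mod3(n: int) -> float:
    """n mod 3 via the trigonometric closed form."""
    s = math.sin(n * math.pi / 3)
    return (-1) ** n * s / math.sqrt(3) + 2 * s ** 2

# Matches the % operator, including for negative n.
for n in range(-12, 13):
    assert abs(mod3(n) - (n % 3)) < 1e-9, n
```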

Connections to Multivariable Interpolation


Consider a multivariate interpolation, mapping \(\mathbb{R}^n\to\mathbb{R}\). One might consider the parameters of a function as an \(n\)-dimensional vector, \(\vec{v}=(x_1,x_2,\dots,x_n)\). Fortunately for us, we can equip this vector space with a norm! We'll denote it \(p(\vec{v})\).

We struggle to interpolate multivariate functions directly, so we'll instead attempt a general method of reduction. Suppose in the \(k\)th case we have \(\vec{v}=\vec{a}_k\). Then \(\vec{v}-\vec{a}_k=\vec{0}\implies p(\vec{v}-\vec{a}_k)=0\) by the definition of the norm. Instant, simple reduction. In the case of the \(\ell^2\) norm, we can use the conditions \(p(\vec{v}-\vec{a}_k)^2=0\) instead, which avoids square roots.

For example, we take this problem:

\[ f(x,y)=\begin{cases} z_1 & x=x_1 & y=y_1 \\ z_2 & x=x_2 & y=y_2 \\ \vdots & \vdots & \vdots \\ z_n & x=x_n & y=y_n \\ \star & \star & \star \end{cases}\]

And reduce it to:

\[ f(x,y)=\begin{cases} z_1 & (x-x_1)^2+(y-y_1)^2=0 \\ z_2 & (x-x_2)^2+(y-y_2)^2=0 \\ \vdots & \vdots \\ z_n & (x-x_n)^2+(y-y_n)^2=0 \\ \star & \star \end{cases}\]

For which we can apply our regular techniques.
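As a sketch of what those regular techniques might look like here (my own minimal Python example; `interp2d` is a hypothetical name, and distinct interpolation points are assumed), we can build a Lagrange-style interpolant directly from the squared-distance conditions:

```python
from math import prod

def interp2d(points, values):
    """Interpolate f(x_k, y_k) = z_k using squared-distance conditions."""
    def d2(p, q):
        # The l2 condition: squared Euclidean distance, zero iff p == q.
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    def f(x, y):
        # Each product equals 1 at its own point and 0 at every other point.
        return sum(
            z * prod(d2((x, y), q) / d2(p, q) for q in points if q != p)
            for p, z in zip(points, values)
        )
    return f

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [1.0, 2.0, 3.0]
f = interp2d(pts, vals)
assert all(abs(f(*p) - z) < 1e-12 for p, z in zip(pts, vals))
```

Between the interpolation points this particular \(f\) behaves nothing like a polynomial interpolant; it merely satisfies the piecewise conditions, which is all the construction asks for.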


You keep telling yourself "\(\mathbb{C}\neq\mathbb{R}+i\mathbb{R}\)" over and over in your head. You keep telling yourself the complex numbers aren't vectors in disguise. It's all in vain. You're a lowly undergrad, and these notions persist, plaguing you for the rest of your undergrad career.

So obviously like any vector, a complex number must have a norm! Right! Right? Actually yes, the Euclidean norm. I win. But not because complex numbers are vectors. Well, according to Wikipedia, if you identify the complex plane with \(\mathbb{R}^2\)...

Suppose we have \(u=v\) with \(u,v\in\mathbb{C}\), and that for some odd reason we need to rewrite this equality. Taking from our previous derivation, we have \(u-v=0\implies\lvert u-v\rvert^2=0\). (We square to remove square roots, yuck.) This way, our equality is 100% real and (hopefully) simplifies down nicely.

It turns out that roots of unity benefit massively from this line of thinking. I realised this when I was definitely not up at 2am, trying to sleep, only to be kept awake by this very thought. Take \(z=\exp(\frac{2\pi i}{k})\) and the condition \(z^n=z^m\). We'll do the same here, using \(\lvert z^n-z^m\rvert=0\), which gives:

\[ \left|\left(\cos(\frac{2n\pi}{k})-\cos(\frac{2m\pi}{k})\right)+i\left(\sin(\frac{2n\pi}{k})-\sin(\frac{2m\pi}{k})\right)\right|=0\]

If we square and expand (using \(\cos^2+\sin^2=1\)), we get the following:

\[ 1=\cos(\frac{2n\pi}{k})\cos(\frac{2m\pi}{k})+\sin(\frac{2n\pi}{k})\sin(\frac{2m\pi}{k})\]

Which, if you know your trig addition formulas well, you'll notice is just:

\[ \cos(\frac{2(n-m)\pi}{k})=1\]

And so we've simplified our conditions greatly by abusing cosine, since the cases in our construction of \(n\bmod k\) use \(z^n=z^m\). Note that the penultimate form is equally useful, and may end up simplifying things anyway (on a case-by-case basis). Via the identity \(1-\cos\theta=2\sin^2(\frac{\theta}{2})\), this same condition can be written:

\[ \sin(\frac{(n-m)\pi}{k})^2=0\]


We can construct \(n\bmod k\) the following way:

\[ n\bmod k=\begin{cases} 0 & n\equiv 0\pmod k \\ 1 & n\equiv 1\pmod k \\ \vdots & \vdots \\ k-1 & n\equiv k-1\pmod k \end{cases}\]

For example, using the cosine formulation for all conditions, we can apply the usual Lagrange approach to give:

\[ n\bmod k=\sum_{m=1}^{k-1}{m\prod_{\substack{j=0\\j\neq m}}^{k-1}{\frac{\cos(\frac{2(n-j)\pi}{k})-1}{\cos(\frac{2(m-j)\pi}{k})-1}}}\]
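This cosine version can also be checked numerically (a Python sketch of my own; `mod_cos` is a hypothetical name):

```python
import math

def mod_cos(n: int, k: int) -> float:
    """n mod k via the cosine-condition Lagrange product."""
    total = 0.0
    for m in range(1, k):
        term = float(m)
        for j in range(k):
            if j != m:
                term *= ((math.cos(2 * (n - j) * math.pi / k) - 1)
                         / (math.cos(2 * (m - j) * math.pi / k) - 1))
        total += term
    return total

# Matches the % operator for several moduli, including negative n.
for k in (2, 3, 5):
    for n in range(-2 * k, 2 * k + 1):
        assert abs(mod_cos(n, k) - (n % k)) < 1e-6, (n, k)
```

Unlike the complex formulation, every intermediate value here is real, so there's no imaginary part to discard.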

In all honesty, the Newton interpolation approach will likely yield far nicer results with only minor simplification. That being said, it's important to understand that any approach that connects these conditions nicely enough will work as a way of finding a form for \(n\bmod k\). Over the reals, the accepted non-elementary formulation is \(n-k\left\lfloor\frac{n}{k}\right\rfloor\).

Realistically, we've outlined several major ways, along with intermediate forms, to formulate the conditions:

  1. \(z^n=z^m, z=\exp(\frac{2\pi i}{k})\)
  2. \(\cos(\frac{2n\pi}{k})\cos(\frac{2m\pi}{k})+\sin(\frac{2n\pi}{k})\sin(\frac{2m\pi}{k})=1\)
  3. \(\cos(\frac{2(n-m)\pi}{k})=1\)
  4. \(\sin(\frac{(n-m)\pi}{k})^2=0\) - remains squared so that \((3)\iff(4)\) holds.

Which one is used is entirely based on circumstance: some conditions may combine to form nicer expressions than others. Realistically, only the first formulation will give exponentiation-based forms, but it is far more difficult to simplify.

Proof that \(n\equiv m\pmod k\iff z^n=z^m\), where \(z=\exp(\frac{2\pi i}{k})\).

Left to right: if \(n\equiv m\pmod k\), then \(n=m+jk\) for some \(j\in\mathbb{Z}\), so \(z^n=z^m(z^k)^j=z^m\), since \(z^k=1\).

Right to left: suppose \(z^n=z^m\); then \(z^{m}(z^{n-m}-1)=0\). Since \(z\neq0\), this gives \(z^{n-m}=1\), i.e. \(\exp(\frac{2(n-m)\pi i}{k})=1\), so \(\frac{2(n-m)\pi}{k}=2j\pi\) for some \(j\in\mathbb{Z}\).

Hence, \(n-m=jk\implies n=m+jk, j\in\mathbb{Z}\). By definition, this means \(n\equiv m\pmod k\).

Corrections and Additions

For corrections and additions, please contact me via the about page. As this page isn't terribly large, I'm definitely open to major suggestions.