# An Adventure in Rewriting Polynomials

## October 20, 2021 | 12 minutes, 47 seconds

# Introduction

APOs are something I've been focusing on for a while now, mainly in terms of how we can write and manipulate them in ways that create functions that adhere to our specifications. Interpolation is a special case of this. When we talk about univariate interpolation in the traditional sense, we're discussing:

> The lowest degree polynomial which satisfies a given number of points.

There are two things to unpack here: "polynomial" and "lowest degree". Recall that the general form of an anonymous function is:

\[ \phi^\star(x)=\phi(x)+[\phi(x)]r(x)\]

Where \(\phi(x)\) is a function satisfying the given points, \([\phi(x)]\) is zero at those given points and \(r(x)\) is any arbitrary function. This function generates an infinite family of functions, a subset of which can be derived using our piecewise techniques.

In order to reduce our infinite family of functions to a single function, we need more information. The fact that our functions are polynomials reduces this family significantly. Furthermore, the "lowest degree" requirement reduces it even more so: for \([\phi(x)]\) to be a polynomial, it must have a root at each of the given points, hence its degree is automatically greater than that of the lowest degree polynomial \(\phi(x)\). This is, in fact, the 'uniqueness of the interpolating polynomial' theorem.

What we have discussed, but seldom in depth, is how we *choose* to write these polynomials. Granted, we have discussed the Lagrange approach and the Newton approach in the context of APOs (producing Lagrange- and Newton-form polynomials respectively), but not particularly formally in the latter case.

In fact, today we'll introduce the piecewise approach to generating Newton-Form polynomials in the pure sense and then apply, and extend, a similar notion to a problem: how to write a "perfect power" (\(x^n\)) in terms of "falling powers" (\(x^{\underline{n}}\)). Don't worry if you don't know what this means just yet; we'll get to it later.

# Newton-Form Polynomials

As discussed previously, one is able to produce a Newton-form polynomial by following the naïve algorithm:

1. Write out the interpolation problem in piecewise form.
2. Subtract out a value corresponding to interpolated point(s)/piece values.
3. Factor out the root from the piecewise object.
4. Expand any brackets using the distributive property of multiplication.
5. Repeat steps 2-4 until one piece remains; the remaining piecewise object becomes the value of this piece.

In a piecewise sense, this algorithm becomes 'impure' at step 4; if that step were removed we wouldn't, in fact, be generating a Newton polynomial, but a nested polynomial. Instead, we perform the following:

1. Write out the interpolation problem in piecewise form.
2. Take a subset of \(n-1\) pieces (where \(n\) is the number of pieces in the outer piecewise object) and condense these into a single piecewise object. The outer condition becomes the product of the roots of the piece conditions inside the nested piecewise object.
3. Repeat step 2 until each (nested) piecewise object contains two pieces.
4. Perform the interpolation on the deepest object, then subtract it out of the piecewise object and evaluate at the relevant points. Repeat until the interpolation is complete.

Ultimately, we'll have a polynomial in the form:

\[ \phi^\star(x)=a_0+a_1(x-x_1)+a_2(x-x_1)(x-x_2)+\dots+a_{n-1}(x-x_1)(x-x_2)\cdots(x-x_{n-1})\]

Where the coefficients can be given by this derivation, or by divided differences. Let us give an example:

\[ \begin{align} x^3 &=\begin{cases} -1 & x=-1 \\ 0 & x=0 \\ 1 & x=1 \\ 8 & x=2 \\ \star & \star \end{cases} \\ &=\begin{cases} \begin{cases} -1 & x=-1 \\ 0 & x=0 \\ 1 & x=1 \\ \star & \star \end{cases} & x(x-1)(x+1)=0\\ 8 & x=2 \\ \star & \star \end{cases} \\ &=\begin{cases} \begin{cases} \begin{cases} -1 & x=-1 \\ 0 & x=0 \\ \star & \star \end{cases} & x(x+1)=0\\ 1 & x=1 \\ \star & \star \end{cases} & x(x-1)(x+1)=0\\ 8 & x=2 \\ \star & \star \end{cases} \\ &=\begin{cases} \begin{cases} x & x(x+1)=0\\ 1 & x=1 \\ \star & \star \end{cases} & x(x-1)(x+1)=0\\ 8 & x=2 \\ \star & \star \end{cases} \\ &=x+\begin{cases} \begin{cases} 0 & x(x+1)=0\\ 1-1 & x=1 \\ \star & \star \end{cases} & x(x-1)(x+1)=0\\ 8-2 & x=2 \\ \star & \star \end{cases} \\ &=x+\begin{cases} 0 & x(x-1)(x+1)=0\\ 6 & x=2 \\ \star & \star \end{cases} \\ &=x+x(x-1)(x+1)\\ \end{align}\]

While this simplified out nicely, not all problems will. This is also, as seen, fairly tedious to do. Hence the existence of the method of divided differences, which is a recursive method of generating the coefficients of all the relevant polynomial factors.
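Divided differences do exactly this bookkeeping for us. Here's a minimal sketch in Python (the function name `divided_differences` is my own) that recovers the Newton coefficients of the worked example above:

```python
def divided_differences(xs, ys):
    """Return the Newton coefficients f[x0], f[x0,x1], ..., f[x0,...,xn]
    by building the divided-difference table in place."""
    coeffs = list(ys)
    n = len(xs)
    for j in range(1, n):
        # Update from the bottom up so earlier entries stay intact.
        for i in range(n - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

# The worked example: interpolate x^3 at x = -1, 0, 1, 2.
xs = [-1, 0, 1, 2]
ys = [x**3 for x in xs]
print(divided_differences(xs, ys))  # [-1.0, 1.0, 0.0, 1.0]
```

The coefficients \([-1, 1, 0, 1]\) give \(-1 + (x+1) + 0\cdot(x+1)x + (x+1)x(x-1)\), which simplifies to the same \(x + x(x-1)(x+1)\) as the piecewise derivation.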

# Introducing Falling Powers (Factorials)

We define the falling powers/factorials like so (where \(n\in\mathbb{Z}^+\)):

\[ x^{\underline{n}}=x(x-1)(x-2)\dots(x-(n-1))\]

We might as well introduce the rising factorial, also known as the Pochhammer function, too, although it won't be used here:

\[ x^{\overline{n}}=x(x+1)(x+2)\dots(x+(n-1))\]

Furthermore, when \(n=0\), we define both of these to be the empty product, \(1\).

We notice immediately that, for integer \(x=n\), \(n^{\underline{n}}=n!\). There are a variety of properties associated with both the falling factorial and rising factorial, which can be found here, but for today's post we'll be focusing on a slightly more naïve approach to an interesting problem.
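Both definitions translate directly into code; a quick sketch (function names `falling` and `rising` are my own choices):

```python
import math

def falling(x, n):
    """x^(underline n) = x(x-1)(x-2)...(x-(n-1)); the empty product 1 when n = 0."""
    out = 1
    for k in range(n):
        out *= x - k
    return out

def rising(x, n):
    """x^(overline n) = x(x+1)(x+2)...(x+(n-1)); also 1 when n = 0."""
    out = 1
    for k in range(n):
        out *= x + k
    return out

# The observation above: n falling-power n equals n factorial.
print(falling(5, 5), math.factorial(5))  # 120 120
```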

Just by observation, it shouldn't be (comparatively) difficult to expand on the falling factorials to get them in terms of 'perfect' powers. And while this could be its own inherently interesting problem, we'll instead focus on another: the opposite. That is: How do we write \(x^n\) for some positive integer \(n\) in terms of falling factorials?

One approach would be to expand the expression given by:

\[ x^n=a_0x^{\underline{0}}+a_1x^{\underline{1}}+\dots+a_nx^{\underline{n}}\]

Noting that the highest degree falling factorial, \(x^{\underline{n}}\), has degree \(n\) and therefore, by matching leading coefficients, has a coefficient of \(1\). Irrespective of this, trying to expand this expression out would be tedious (oh the irony -- you'll see why). Instead, we notice that:

\[ x^n=a_0+a_1x+a_2x(x-1)+\dots+a_nx(x-1)(x-2)\cdots(x-(n-1))\]

If you have keen observation skills, you'll notice this looks *a lot* like a Newton polynomial. In fact, it is. And this has beautiful implications. In particular, we realise that the actual method of writing \(x^n\) is pretty much irrelevant, despite that being our end goal.

There are some important relationships to be made here. Firstly, the coefficients of this polynomial are actually called the Stirling numbers of the \(2^{\textrm{nd}}\) kind. Secondly, this means that, ultimately, Newton's method of divided differences is *related* to the Stirling numbers. Lastly, the falling factorials form a polynomial basis: it is, in fact, possible to rewrite every polynomial in terms of factorial powers, as we'll *explicitly* see via our derivation.
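We can sanity-check the claim \(x^n=\sum_k S(n,k)\,x^{\underline{k}}\) numerically. The sketch below generates the Stirling numbers via the classic triangle recurrence \(S(n,k)=k\,S(n-1,k)+S(n-1,k-1)\), which isn't derived in this post (the helper names are my own):

```python
def stirling2(n, k):
    """Stirling numbers of the second kind via the standard triangle recurrence."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def falling(x, n):
    """x^(underline n) = x(x-1)...(x-(n-1))."""
    out = 1
    for j in range(n):
        out *= x - j
    return out

# Check x^n == sum_k S(n,k) * x^(underline k) at several integer points.
n = 4
for x in range(-3, 4):
    assert x**n == sum(stirling2(n, k) * falling(x, k) for k in range(n + 1))
```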

# Recursive Hell & Polynomials as Falling Powers

One important observation to make is that a polynomial \(x^n\), in order to be written as a linear combination of falling powers, must be interpolated at points:

\[ x=0,1,\dots,n\]

Furthermore, we must interpolate for the lowest degree polynomial. That is, we write the following:

\[ x^n=\begin{cases} 0 & x=0 \\ 1 & x=1 \\ 2^n & x=2 \\ \vdots & \vdots \\ n^n & x=n \\ \star & \star \end{cases}\]

Okay, so this looks simple enough, but it's probably difficult to tackle the same way we did before. We attempt to nest this problem once:

\[ x^n=\begin{cases} \begin{cases} 0 & x=0 \\ 1 & x=1 \\ 2^n & x=2 \\ \vdots & \vdots \\ (n-1)^n & x=n-1 \end{cases} & x^{\underline{n}}=0\\ n^n & x=n \\ \star & \star \end{cases}\]

Immediately, we run into a problem: we don't have an expression for the inner piecewise object. We could, of course, write the nested piecewise object as \(x^n\) also, but that would lead us nowhere. We could also try to squeeze some expression out of it, but that would be difficult. Instead, let us introduce a generalised function:

\[ T_{n,k}(x)=\begin{cases} 0 & x=0 \\ 1 & x=1 \\ 2^n & x=2 \\ \vdots & \vdots \\ k^n & x=k \\ \star & \star \end{cases}\]

Then we can do the same thing, *but* go further:

\[ T_{n,k}(x)=\begin{cases} \begin{cases} 0 & x=0 \\ 1 & x=1 \\ 2^n & x=2 \\ \vdots & \vdots \\ (k-1)^n & x=k-1 \end{cases} & x^{\underline{k}}=0\\ k^n & x=k \\ \star & \star \end{cases}=\begin{cases} T_{n,k-1}(x) & x^{\underline{k}}=0 \\ k^n & x=k \end{cases}\]

And just like that, we have a recursive piecewise relation. Amazing! We also take note that \(T_{n,1}(x)=x\). Continuing along our derivation, we reach:

\[ \begin{align} T_{n,k}(x) &=T_{n,k-1}(x)+\begin{cases} 0 & x^\underline{k}=0 \\ k^n - T_{n,k-1}(k) & x=k \\ \star & \star \end{cases} \\ &=T_{n,k-1}(x)+x^\underline{k}\begin{cases} \frac{k^n - T_{n,k-1}(k)}{k^\underline{k}} & x=k \\ \star & \star \end{cases} \\ &=T_{n,k-1}(x)+\frac{k^n - T_{n,k-1}(k)}{k!}x^\underline{k} \end{align}\]

What we'll do now is expand recursively on the term \(T_{n,k-1}(x)\):

\[ T_{n,k}(x)=x^\underline{1}+\sum_{i=2}^{k}{\frac{i^n-T_{n,i-1}(i)}{i!}x^\underline{i}}\]

Noting that this is still a recurrence relation (via the \(T_{n,i-1}(i)\) term). We also note that this equation is identical to what we get by following through with our Newton algorithm as given before:

\[ T_{n,k}(x)=T_{n,1}(x)+\begin{cases} 0 & x^\underline{2}=0 \\ 2^n-T_{n,1}(2) & x=2 \\ \star & \star \end{cases}+\begin{cases} 0 & x^\underline{3}=0 \\ 3^n-T_{n,2}(3) & x=3\\ \star & \star \end{cases}+\dots+\begin{cases} 0 & x^\underline{k}=0 \\ k^n-T_{n,k-1}(k) & x=k\\ \star & \star \end{cases}\]
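The recurrence is easy to check numerically before we go further. A minimal sketch (names `T` and `falling` are my own; the coefficient \((k^n-T_{n,k-1}(k))/k!\) always turns out to be an integer, so integer division is safe):

```python
from math import factorial

def falling(x, n):
    """x^(underline n) = x(x-1)...(x-(n-1))."""
    out = 1
    for j in range(n):
        out *= x - j
    return out

def T(n, k, x):
    """Evaluate T_{n,k}(x) directly from the recurrence derived above,
    with base case T_{n,1}(x) = x."""
    if k == 1:
        return x
    prev = T(n, k - 1, x)
    coeff = (k**n - T(n, k - 1, k)) // factorial(k)  # exact: always an integer
    return prev + coeff * falling(x, k)

# T_{n,n} interpolates x^n at the n+1 points 0..n with degree <= n,
# so it must equal x^n identically.
assert all(T(5, 5, x) == x**5 for x in range(-4, 10))
```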

It's here we take a step back; it is not trivial to solve this recurrence relation independently. At all. Actually it's a pain in the ass. Don't do it. Like actually don't even try. Instead, we consider defining:

\[ T_{n,i-1}(i):=C_n(i)=\begin{cases} 0 & i=0 \\ 1 & i=1 \\ 2^n & i=2 \\ \vdots & \vdots \\ (i-1)^n & i=i-1 \\ \star & \star \end{cases}\]

Which gives us, noting that \(C_n(1)=0\) by definition:

\[ T_{n,k}(x)=\sum_{i=1}^{k}{\frac{i^n-C_n(i)}{i!}x^\underline{i}}\]

And therefore \(T_{n,k}(x)\) is no longer defined recursively. The only remaining matter is finding \(C_n(i)\) for general \(i\). In fact, we can use any interpolation method available to us for computational purposes; symbolically, it doesn't matter too much. For our formulation, we'll use the Lagrange method, which I'll skip over, giving:

\[ C_n(i)=\sum_{p=0}^{i-1}{p^n\prod_{\substack{q=0\\q\neq p}}^{i-1}{\frac{i-q}{p-q}}}\]

I prefer not to write the whole formula at once. It's not very nice. However, the relevant formulation we're looking for is:

\[ x^n=T_{n,n}(x)=\sum_{i=1}^{n}{\frac{i^n-C_n(i)}{i!}x^\underline{i}}\]
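This final formulation can be computed directly. A sketch using exact rational arithmetic via Python's `fractions` module (the names `C` and `coeff` are mine):

```python
from fractions import Fraction
from math import factorial

def C(n, i):
    """Lagrange interpolant of the points (p, p^n) for p = 0..i-1, evaluated at x = i."""
    total = Fraction(0)
    for p in range(i):
        term = Fraction(p**n)
        for q in range(i):
            if q != p:
                term *= Fraction(i - q, p - q)
        total += term
    return total

def coeff(n, i):
    """The coefficient of x^(underline i) in x^n, per the formulation above."""
    return (i**n - C(n, i)) / factorial(i)

# These coefficients are the Stirling numbers of the second kind S(4, i).
print([int(coeff(4, i)) for i in range(1, 5)])  # [1, 7, 6, 1]
```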

As a limitation of our formulation, this definition works only for \(n\in\mathbb{Z}^+\). We in fact get \(x^0=0\) if \(n=0\)...

# Ideas

We mentioned change of basis of polynomials earlier. While my knowledge doesn't extend far into this topic, we know that we can write any polynomial as a linear combination of "perfect powers":

\[ P(x)=\sum_{n}{a_nx^n}=\sum_{n}{a_nT_{n,n}(x)}\]

Therefore, we can also write this polynomial as:

\[ P(x)=\sum_{m}{b_mx^{\underline{m}}}\]

Where the \(b_m\) are found by expanding each \(T_{n,n}(x)\) and collecting the coefficients of each falling power. We know we can do this as each power \(x^n\) ends up as a sum of falling powers of degree up to \(n\). *This* is a change of basis; the basis we use becomes the falling factorials rather than the individual powers.
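This change of basis is also exactly what divided differences on the nodes \(0,1,\dots,\deg P\) compute, since the Newton products on those nodes are the falling powers themselves. A sketch under that observation (the function name is mine; exact arithmetic via `fractions`):

```python
from fractions import Fraction

def to_falling_basis(coeffs):
    """Given P(x) = sum a_n x^n (coeffs[n] = a_n), return b with
    P(x) = sum b_m x^(underline m), computed as the divided differences
    of P on the equally spaced nodes 0, 1, ..., deg P."""
    deg = len(coeffs) - 1
    vals = [Fraction(sum(a * x**n for n, a in enumerate(coeffs)))
            for x in range(deg + 1)]
    for j in range(1, deg + 1):
        for i in range(deg, j - 1, -1):
            vals[i] = (vals[i] - vals[i - 1]) / j  # node spacing is 1
    return vals

# P(x) = x^3 + 2x. Since x^3 = x + 3x(x-1) + x(x-1)(x-2),
# P = 3*x^(1) + 3*x^(2) + x^(3), i.e. b = [0, 3, 3, 1].
print([int(b) for b in to_falling_basis([0, 2, 0, 1])])  # [0, 3, 3, 1]
```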

Another thing we noted earlier is that the coefficients of each falling power make up the Stirling numbers of the second kind. That is,

\[ S(n,k)=\frac{k^n-C_n(k)}{k!}\]

Now, as far as definitions go, ours isn't terribly far off. In fact, these are formulated elsewhere as:

\[ S(n,k)=\frac{1}{k!}\sum_{i=0}^{k}{(-1)^i\begin{pmatrix}k\\i\end{pmatrix}(k-i)^n}\]

What I find absolutely beautiful is the term \(\frac{k^n-C_n(k)}{k!}\) -- despite the massive size of the exponential and function \(C_n(k)\) for large values of \(k\) and \(n\), this term finds itself much smaller than any of them, and always an integer.

In fact, the Stirling numbers are therefore related to Newton's divided differences; the divided differences method gives the coefficients of the falling powers. As do the Stirling numbers. It thus stands to reason that divided differences in this very specific example give the Stirling numbers also (and would also be a pretty viable way of generating them, given how simple divided differences is).
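The claim that divided differences generate the Stirling numbers can be tested directly: the top divided difference of \(x^n\) on the nodes \(0,1,\dots,k\) should equal the standard explicit formula. A sketch (both function names are mine):

```python
from fractions import Fraction
from math import comb, factorial

def stirling2_explicit(n, k):
    """The standard inclusion-exclusion formula for S(n, k)."""
    return sum((-1)**i * comb(k, i) * (k - i)**n for i in range(k + 1)) // factorial(k)

def stirling2_dd(n, k):
    """S(n, k) as the top divided difference of x^n on the nodes 0, 1, ..., k."""
    vals = [Fraction(x**n) for x in range(k + 1)]
    for j in range(1, k + 1):
        for i in range(k, j - 1, -1):
            vals[i] = (vals[i] - vals[i - 1]) / j  # node spacing is 1
    return int(vals[k])

# The two methods agree across a small triangle of values.
assert all(stirling2_explicit(n, k) == stirling2_dd(n, k)
           for n in range(1, 7) for k in range(1, n + 1))
```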

Let's not forget how combinatoric-y this is, as well as both the algebraic and analytic implications. We love it.

# Other Way Round

Here's a nice challenge. Using the same principle as above, rewrite \(x^{\underline{n}}\) in terms of perfect powers. Basically, expand the factors ;)