Piecewise Formalisms

October 05, 2021 | 15 minutes, 46 seconds

Introduction

Over the last few posts I've briefly covered the idea behind anonymous piecewise objects. Having mulled them over for a while longer, I'll now introduce some formalisms to begin tying the related ideas together, alongside a more formal definition of piecewise in general.

The reasons for this post are threefold:

  1. The brief bit of notation forms a small, esoteric framework from which functions, relations and other objects can be built more 'purely'.
  2. The ideas that come out of the thought processes involved make for an interesting post and, in full generality, help pose some interesting questions.
  3. To brainstorm further ways of abstracting this framework or applying it, especially in a non-polynomial context. A number of the ideas seem to centre around extensions of functions.

Notation for the sake of notation is bad, but we're building a framework here, so I'm guilt-free, for the most part.

Piecewise Objects

A piecewise object can be defined generally as:

\[ \phi=\left\{\phi_i,\quad P_i\mid i\in I\right\}\]

This is a bit of strange notation. Notably, it doesn't follow the convention of having only one brace, as a conventional piecewise object does. We more or less consider this a set of pairs: \(\phi_i\), the value component of each piece, and \(P_i\), the set of conditions associated with that piece. \(I\) is the index set that \(i\) ranges over; i.e. it forms all of our pieces. You could in fact add another index set should you choose to.
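As a throwaway illustration (entirely my own sketch, nothing formal), you could model such an object in code as a plain collection of \((\phi_i,\ P_i)\) pairs:

```python
# A piecewise object as a list of (value, condition) pairs, i.e. the
# pairs (phi_i, P_i), with i ranging over the list indices.
def make_piecewise(pieces):
    def phi(x):
        for value, condition in pieces:
            if condition(x):       # P_i holds at x...
                return value(x)    # ...so this piece supplies phi_i(x)
        raise ValueError("phi is not defined at this input")
    return phi

# phi(x) = x^2 on [-1, 1]: the continuum of pieces {a^2, x = a} is
# collapsed into a single pair for the sketch's sake.
square = make_piecewise([(lambda a: a**2, lambda a: -1 <= a <= 1)])
```

Outside \([-1,1]\), no condition holds, so the object simply isn't defined there.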

In practice, we don't really use this notation for the conventional piecewise object; we use the more traditional notation we're used to seeing, as shown off in previous posts. With this, though, comes the advantage that we can describe functions we already know over continuous intervals, for example:

\[ \phi(x)=\left\{a^2,\quad x=a\mid a\in[-1,1]\right\}\]

In this case, we define \(a=i\in[-1,1]=I\), and \(\forall i\ P_i\iff x=i\). Furthermore, this is an interesting way of writing:

\[ \phi:[-1,1],\quad \phi(x)=x^2\]

The most obvious thing we lose in this, however, is the ability to write rules separately. Ergo, we have to find a single expression that describes the same thing (and in this instance it becomes obvious that notation like this is esoteric). It does, however, encapsulate all of the conceivable rules one could think of using piecewise notation, and is an area in and of itself worth exploring.

Another example is the absolute value:

\[ \left|x\right|=\begin{cases} x & x\geq 0\\ -x & x\leq 0 \end{cases}\]

In this instance, we consider \(i\in\left\{1,2\right\}\) for example, where \(\phi_1(x)=x\), \(\phi_2(x)=-x\) and \(P_1\iff x\geq 0\) and \(P_2\iff x\leq 0\).
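In code, sticking with the earlier sketch style (again, my own illustration rather than anything formal), those two pieces look like:

```python
# |x| as a two-piece object: phi_1(x) = x with P_1: x >= 0, and
# phi_2(x) = -x with P_2: x <= 0. The pieces overlap at x = 0, but
# both agree there, so the object is still well-defined.
pieces = [
    (lambda x: x,  lambda x: x >= 0),   # (phi_1, P_1)
    (lambda x: -x, lambda x: x <= 0),   # (phi_2, P_2)
]

def abs_pw(x):
    for value, condition in pieces:
        if condition(x):
            return value(x)
    raise ValueError("undefined")
```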

Piecewise objects are not limited to functions. They can be relations (ones lacking well-definition, for example), vector fields, scalar fields (functions), or other things. Of course, you'd need to be careful about how you define them in the relevant contexts.

An interesting idea is \(\phi(x)=\left\{i,\quad x=ij\mid i\in[-1,1],\ j\in[-1,1]\right\}\). Not well-defined everywhere (anywhere?) and definitely not a function. Oh well.

Anonymous Piecewise Objects

Anonymous piecewise objects are piecewise objects defined in general as:

\[ \phi^\star=\begin{cases} \phi & \phi\textrm{ is defined}\\ \star & \phi\textrm{ is not defined} \end{cases}\]

It's that 'simple', but it's an idea that yields univariate interpolation via polynomials, for example, and permits a foundational way of extending a function without regard for analyticity, well-definition or explicit formulation. Obviously this is a seemingly-absurd idea; you could create anything you wanted, and there is no notion of a correct interpretation in general.

This is where we impose our own conditions. In univariate interpolation, we impose two major conditions:

  1. Any function that satisfies our points should be a polynomial, and,
  2. that function should be the minimal degree polynomial satisfying those points.

This uniquely determines a polynomial: if two polynomials of degree at most \(n\) agreed at \(n+1\) points, their difference would have \(n+1\) roots despite having degree at most \(n\), forcing it to be zero (a consequence of the factor theorem).
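As a sketch of how those two conditions pin things down in practice, here is the minimal-degree interpolant computed directly via Lagrange's formula (my own illustration; coefficient lists run constant term first, and exact rationals avoid floating-point noise):

```python
from fractions import Fraction

def lagrange_coeffs(points):
    """Coefficients [c0, c1, ...] of the unique minimal-degree
    polynomial c0 + c1*x + ... through the given (x_i, y_i)."""
    n = len(points)
    coeffs = [Fraction(0)] * n
    for i, (xi, yi) in enumerate(points):
        # Build the basis numerator prod_{j != i} (x - x_j) as a
        # coefficient list, tracking the scalar denominator
        # prod_{j != i} (x_i - x_j) separately.
        basis = [Fraction(1)]
        denom = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            basis = [Fraction(0)] + basis            # multiply by x
            for k in range(len(basis) - 1):
                basis[k] -= Fraction(xj) * basis[k + 1]  # subtract xj * old
            denom *= Fraction(xi) - Fraction(xj)
        for k, b in enumerate(basis):
            coeffs[k] += Fraction(yi) * b / denom
    return coeffs
```

For the points \((-1,1)\), \((0,0)\), \((1,1)\) this returns the coefficients of \(x^2\), as in the earlier example.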

In any case, we can make this form slightly more digestible, even if less flexible.

To begin, we have the following:

\[ [\phi]=\begin{cases} 0 & \phi\textrm{ is defined}\\ \star & \phi\textrm{ is not defined} \end{cases}\]

This is our decider object. It's special in that it allows us to form an explicit condition which represents when \(\phi\) is or isn't defined. This is also incredibly similar to a characteristic function as I've previously explored.

Furthermore, we let \(\mu\) be a single instance of \(\phi^\star\); that is, it satisfies the definition of \(\phi^\star\). This means it has the definition:

\[ \mu=\begin{cases} \phi & \phi\textrm{ is defined}\\ \star & \phi\textrm{ is not defined} \end{cases}\]

Almost-identically defined to \(\phi^\star\), although the anonymous pieces are not necessarily equal. Furthermore, we can define \(\phi^\star\) as below:

\[ \phi^\star=\begin{cases} \mu & [\phi]=0 \\ \star & [\phi]\neq 0 \end{cases}\]

And taking inspiration from interpolation, we get:

\[ \phi^\star=\mu+[\phi]\cdot r\]

Where \(r\) is also an arbitrary... anything (with respect to whatever \(\phi^\star\) is). Similarly, we could have defined \(\phi^\star\) as:

\[ \phi^\star=\sum_{i\in I}{a_i\mu_i}+\sum_{j\in J}{[\phi]_j\cdot r_j}\]

Where \(\sum_{i\in I}{a_i}=1\). This is considered to be an extended form, but it more symbolically represents the superposition nature of the decider objects (which are not necessarily identical) and their respective associated arbitrary objects. Furthermore, in this instance, we consider a combination of \(\mu_i\) which ultimately adds to \(\phi\) when \(\phi\) is defined.

In any case, there are infinitely many ways of adding zero, and there are infinitely many ways of defining something that's not defined. When we consider a function over a domain, for example, we don't care about anything outside that domain. And, you guessed it, when it comes to interpolation, we're considering something that doesn't matter outside of the domain and fitting a polynomial (usually) to it. Interpreting data like this is important, and we can do it in infinitely many ways. That's not really my area of interest, though; I'm more interested in the implications of what happens when we ultimately look for more than what we already have, and in interpretations of what we don't.

The aim here is to move away from interpolation and more into what we've already explored: which combinations of functions, for example, can be used as associated piece conditions to achieve what we previously manipulated using inequalities? Hint: we can define \(\max\) and \(\min\) in terms of the absolute value function this way. And although these don't technically constitute anonymous piecewise functions, they apply some of the techniques associated with them. From \(\max\) and \(\min\) comes \(\operatorname{clamp}\), which allows us to create our sticking function without using inequalities. Coming soon™.
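The identities hinted at are the standard absolute-value ones; as a quick sketch (a slight spoiler for the upcoming post, but only slight):

```python
# max, min and clamp built from |.| alone - no inequality branching.
def pw_max(a, b):
    return (a + b + abs(a - b)) / 2   # the larger of a, b

def pw_min(a, b):
    return (a + b - abs(a - b)) / 2   # the smaller of a, b

def clamp(x, lo, hi):
    # pin x into the interval [lo, hi]
    return pw_max(lo, pw_min(x, hi))
```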

Uniqueness

We could consider a unique set of conditions to be the conditions which alone determine a problem. We could consider a unique solution to be the set of conditions under which, alone, a problem is solved. Uniqueness is context-dependent and I'm pretty sure it's way above my knowledge grade.

What I'm trying to get at is this: we can only consider something unique if we eliminate any other possible irrelevant conditions, variables or whatnot; ones that exist but hold no relevance to the problem at hand. The analogue here is that \(\phi^\star\) describes as many objects as you want satisfying a given set of conditions, but outside of those conditions, you can have anything you want. This necessarily means that we cannot perform proofs on such objects without vastly more information.

As a general framework, the above is problematic. We need more information for applications of these ideas, but the more information we have, the less generality we have. The question becomes: at what point can we more or less latch on and solidify some parts of this framework? We know that interpolation is one of those, but it is incredibly specific. If I'm being honest, I have no idea how to proceed here, and this is one of the many issues with the trains of thought I've had while thinking about this. Obviously, it's down a rabbit hole. But at what point should I stop and say this is probably too much or too esoteric?

The idea behind APOs is to mostly learn new techniques. Whatever we apply to APOs we can apply to other piecewise objects.

Polynomials

Result 1

We want to prove that all polynomials that pass through all of the points in the set \(\left\{(x_i,y_i):i\in I\right\}\) can be written in the form:

\[ P(x)=\mu(x)+r(x)\cdot\prod_{i\in I}{(x-x_i)}\]

Where \(\mu(x)\) is a fixed polynomial that satisfies all the points above (usually the lowest-degree polynomial), and \(r(x)\) is any arbitrary polynomial. This is a consequence of derivation using APOs.

Since \(P(x)\) and \(\mu(x)\) both pass through every point, \(P(x)-\mu(x)\) vanishes at each \(x_i\). By the factor theorem, and since the \(x_i\) are distinct, \(\prod_{i\in I}{(x-x_i)}\) divides \(P(x)-\mu(x)\). Therefore \(P(x)-\mu(x)=k(x)\prod_{i\in I}{(x-x_i)}\) for some polynomial \(k(x)\), and taking \(r(x)=k(x)\) gives \(P(x)=\mu(x)+r(x)\prod_{i\in I}{(x-x_i)}\).
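A quick numeric sanity check of Result 1 (my own sketch): take \(\mu(x)=x^2\) through the points \((-1,1)\), \((0,0)\), \((1,1)\); any choice of \(r\) then yields another polynomial through the same points.

```python
points = [(-1, 1), (0, 0), (1, 1)]

def mu(x):
    return x**2            # a fixed polynomial through the points

def annihilator(x):
    # prod_i (x - x_i): zero exactly at the chosen points
    p = 1
    for xi, _ in points:
        p *= (x - xi)
    return p

def P(x, r):
    # the general form of Result 1; r is any polynomial (any callable)
    return mu(x) + r(x) * annihilator(x)
```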

Result 2

Original question: What happens if I differentiate a 'general' polynomial form? Anything interesting? (Original answer was two polynomials can't share the same derivatives at the same points lmao)

All polynomials that share an arbitrary number of points, and all derivatives up to the \(n\)th at those points, have the form:

\[ \phi^\star(x)=\phi(x)+[\phi(x)]^{n+1}r(x)\]

Where \(\phi(x)\) is a given polynomial that satisfies the points and derivatives and \([\phi(x)]\) is a polynomial with zeroes at those points. \(r(x)\) is an arbitrary polynomial.

Proof: We consider all polynomials of the form \(\phi^\star(x)=\phi(x)+[\phi(x)]r(x)\).

We note that the \(n\)th derivative of this polynomial has equation:

\[ \phi^{\star(n)}(x)=\phi^{(n)}(x)+[\phi(x)]^{(n)}r(x)+[\phi(x)]^{(n-1)}(\dots)+\dots+[\phi(x)](\dots)\]

We also want the following: \(\phi^{\star(n)}(x)=\phi^{(n)}(x)\) when \([\phi(x)]=0,[\phi(x)]'=0,\dots,[\phi(x)]^{(n-1)}=0\). That is, the \(n\)th derivative of \(\phi(x)\) at the points is shared by \(\phi^\star(x)\), and likewise all derivatives up to the \(n\)th, which is exactly when each root polynomial is \(0\) by definition.

At those points, we are therefore left with the equation \([\phi(x)]^{(n)}r(x)=0\).

Case 1: From the above equation, we consider when \(r(x)\) vanishes, but not for all \(x\) (which is a trivial solution). \(r(x)\) vanishes iff all of the conditions above are met; that is, \([\phi(x)]=0,[\phi(x)]'=0,\dots,[\phi(x)]^{(n-1)}=0\).

Equivalently, this means that if we consider \([\lambda(x)]\) to have all of the zeroes as above, we can write, for arbitrary polynomials \(p(x)\) and \(s(x)\):

\[ [\lambda(x)]=p(x)\prod_{k=0}^{n-1}[\phi(x)]^{(k)}\]

\[ r(x)=[\lambda(x)]+\left[[\lambda(x)]\right]s(x)\]

Noting furthermore that the decider of \([\lambda(x)]\) shares zeroes with \([\lambda(x)]\) itself, then, for yet another arbitrary polynomial \(q(x)\):

\[ r(x)=q(x)[\lambda(x)]\]

We now consider the derivatives of the respective deciders in \([\lambda(x)]\). Note that for all \(m\) from \(0\) to \(n-2\), the \((m+1)\)th derivative of the decider shares zeroes with the \(m\)th derivative of the decider;

\[ \begin{align} &[\phi(x)]^{(m+1)}=[\phi(x)]^{(m)}+[[\phi(x)]^{(m)}]\alpha_m(x)\\ \implies&[\phi(x)]^{(m+1)}=[\phi(x)]^{(m)}\lambda_m(x) \end{align}\]

Solving the recurrence relation, we get that \([\phi(x)]^{(m+1)}=[\phi(x)]m(x)\) (for yet another arbitrary polynomial \(m(x)\)). Alternatively, one could have weakly noted that each derivative of the decider shares zeroes with the decider itself, and achieved the same result.

Using the above result, we have that:

\[ r(x)=l(x)[\phi(x)]^n\]

Hence we have that \(\phi^\star(x)=\phi(x)+[\phi(x)]^{n+1}l(x)\), for some arbitrary polynomial \(l(x)\), which can be 'relabelled' as \(r(x)\).

Case 2: We consider now cases when \([\phi(x)]^{(n)}\) vanishes (sometimes).

Suppose \([\phi(x)]^{(n)}\) vanishes at a subset of the arbitrarily chosen points, but not at the others; then \(r(x)\) must vanish at those other points. It follows that \([\phi(x)]^{(n)}r(x)\) shares zeroes with \([\phi(x)]\) (this can be shown using the same reasoning as in the previous case).

\[ [\phi(x)]^{(n)}r(x)=[\phi(x)]s(x)\]

Since this differential equation in \([\phi(x)]\) needs to be solvable for all non-zero polynomials \(r(x),s(x)\), and for general \(n\in\mathbb{Z}^+\), the only viable solution is \([\phi(x)]=0\), which gives the trivial solution. (Also, \([\phi(x)]\) is a polynomial.)

(I really need to solidify this argument.)
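Whatever the status of Case 2, the statement of Result 2 itself is easy to check numerically. A sketch with \([\phi(x)]=x(x-1)\) and \(n=1\), using coefficient lists (constant term first; the \(\phi\) and \(r\) below are arbitrary choices of mine):

```python
def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_deriv(a):
    return [k * a[k] for k in range(1, len(a))]

def poly_eval(a, x):
    return sum(c * x**k for k, c in enumerate(a))

phi = [1, 2, 3]            # phi(x) = 1 + 2x + 3x^2, picked arbitrarily
decider = [0, -1, 1]       # [phi](x) = x(x - 1): zeroes at the points 0, 1
r = [5, -4, 0, 2]          # an arbitrary r(x)

# phi* = phi + [phi]^(n+1) * r with n = 1, i.e. phi + [phi]^2 * r
correction = poly_mul(poly_mul(decider, decider), r)
phi_star = [a + b for a, b in zip(phi + [0] * len(correction),
                                  correction + [0] * len(phi))]
```

\(\phi^\star\) then agrees with \(\phi\) in both value and first derivative at \(x=0\) and \(x=1\), since \([\phi]^2\) has double roots there.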

Result 3

Writing polynomials in terms of falling powers; \(x^\underline{n}=x(x-1)\cdots(x-(n-1))\).

This is a special case of Newton-form polynomial interpolation at the points \(x=0,1,\cdots,n\), where \(n\) is the degree of the desired polynomial. This can, of course, be done via piecewise - demonstrably so, as in one of my previous posts. However, a technique that does not require algebraic expansion after the fact can be achieved using nesting of APOs. For example:

\[ \begin{align} n^3&\equiv\begin{cases} 0 & n=0\\ 1 & n=1\\ 8 & n=2\\ 27 & n=3\\ \star & \star \end{cases}\\ &=\begin{cases} \begin{cases} 0 & n=0\\ 1 & n=1\\ 8 & n=2\\ \star & \star \end{cases} & n(n-1)(n-2)=0\\ 27 & n=3\\ \star & \star \end{cases}\\ &=\begin{cases}\begin{cases}\begin{cases}0 & n=0\\1 & n=1\\\star & \star\end{cases} & n(n-1)=0\\8 & n=2\\\star & \star\end{cases} & n(n-1)(n-2)=0\\27 & n=3\\\star & \star\end{cases}\\ &=\begin{cases} \begin{cases} \begin{cases} \begin{cases} 0 & n=0\\ \star & \star \end{cases} & n=0\\ 1 & n=1\\ \star & \star \end{cases} & n(n-1)=0\\ 8 & n=2\\ \star & \star \end{cases} & n(n-1)(n-2)=0\\ 27 & n=3\\ \star & \star \end{cases} \end{align}\]

Ultimately, we can make sense of this with a little help from auxiliary functions, keeping in mind we are going for lowest-degree polynomials:

\[ \begin{align} A(n)&=\begin{cases} 0 & n=0\\ \star&\star \end{cases}:=0\\ B(n)&=\begin{cases} A(n) & n=0 \\ 1 & n=1\\ \star & \star \end{cases}:=A(n)+\begin{cases} 0 & n=0 \\ 1-A(n) & n=1\\\star & \star \end{cases}=n\\ C(n)&=\begin{cases} B(n) & n(n-1)=0 \\ 8 & n=2 \end{cases}:=B(n)+\begin{cases} 0 & n(n-1) =0\\ 8-B(n) & n=2 \end{cases}=n+3n(n-1)\\ n^3&=\begin{cases} C(n) & n(n-1)(n-2) =0 \\ 27 & n=3 \end{cases}:=C(n)+\begin{cases} 0 & n(n-1)(n-2)=0 \\ 27-C(n) & n=3\end{cases} \end{align}\]

Therefore, we have:

\[ n^3=n+3n(n-1)+n(n-1)(n-2)\]

This is in reference to Burkard/Mathologer's recent video, found here.
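For what it's worth, the falling-power coefficients can also be read off mechanically from forward differences at \(0\) (the \(j\)th coefficient is \((\Delta^j f)(0)/j!\), a standard fact about Newton-form interpolation); a sketch:

```python
from fractions import Fraction
from math import factorial

def falling_power_coeffs(values):
    """values = f(0), f(1), ..., f(k). Returns c_0, ..., c_k with
    f(n) = sum_j c_j * n(n-1)...(n-j+1), via c_j = (Delta^j f)(0)/j!."""
    coeffs = []
    row = [Fraction(v) for v in values]
    j = 0
    while row:
        coeffs.append(row[0] / factorial(j))          # leading difference
        row = [b - a for a, b in zip(row, row[1:])]   # next difference row
        j += 1
    return coeffs
```

Feeding in \(0, 1, 8, 27\) recovers the coefficients \(0, 1, 3, 1\) of the expansion above.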

Homework?

Consider the polynomial \(P(x)=\frac{1}{2}x\left(3x^{3}-2x^{2}-x+2\right)\):

  1. Show that \(P(x)\) satisfies \(P(-1)=1\), \(P(0)=0\), \(P(1)=1\) and hence,
  2. Write \(P(x)\) in the form \(x^2+x(x^2-1)r(x)\) for some \(r(x)\).
  3. Show that \(Q(x)=x^2+x(x^2-1)s(x)\) satisfies the same conditions as \((1)\) for any polynomial \(s(x)\).

As a bonus, write \(P(x)\) as an APO using \(x=0\), \(x=-1\) and \(x=1\) as points of definition for \(P(x)\) and rederive the result from question 3, if you didn't derive it already.
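If you'd like to check your answers numerically, here's a sketch (it only verifies the stated conditions, so it doesn't give away \(r(x)\)):

```python
def P(x):
    # the homework polynomial
    return 0.5 * x * (3 * x**3 - 2 * x**2 - x + 2)

def Q(x, s):
    # the general form from question 3; s is any polynomial (any callable)
    return x**2 + x * (x**2 - 1) * s(x)
```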

Conclusion

C-c-conclusion? Haven't done one of these in a while!

I hope you take a few things away from this post. The first: piecewise notation isn't easy to describe in full generality. Secondly, this framework is a work in progress and is more algebraic-manipulation-oriented, as demonstrated by the examples. This doesn't mean my work has to end up this way; in fact, it'll probably end up being irrelevant.

As someone who has been on the piecewise journey for a long time now, the learning I've done with piecewise up until this point seems insignificant and esoteric, and this is the prime example of that. But I feel like I'm finally learning something - and I'm truly excited to take you on a journey with me to create all sorts of stuff. Don't get me wrong; my achievements in the past with squares, Batman and others aren't insignificant to me by any means, but with those it truly felt like I wasn't fully writing something myself. This does feel like mine, as esoteric as it is, and it truly feels wonderful to say I have my own something here.

So thank you for reading, if you have. I truly, from the bottom of my heart, appreciate it.