# Encoding Functions with Piecewise

## October 26, 2021 | 8 minutes, 12 seconds

# Introduction

It began with a question: can I have a function that encodes multiple functions, via some parameter, or parameters? One which smoothly encodes these functions under such a parameter (or parameters)? And the answer is: yes, yes we can.

When I began looking into this, one of my friends suggested this was close to homotopies (and I suppose it is, though I've not looked into them). And so I began my rudimentary exploration of parameterising such problems, in the form of encoding two functions via one parameter. I came up with the following type of equation:

\[ F(x,t)=f(x)+\frac{g(x)-f(x)}{b-a}(t-a)\]

For \(t\in[a,b]\).
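As a quick sanity check, here's that interpolation in code. The particular choices of \(f\), \(g\), \(a\) and \(b\) below are mine, purely for illustration:

```python
# Linear interpolation between two functions f and g over t in [a, b]:
# F(x, a) = f(x) and F(x, b) = g(x), varying linearly in between.
def F(x, t, f, g, a, b):
    return f(x) + (g(x) - f(x)) / (b - a) * (t - a)

# Illustrative choices, not fixed by the derivation.
f = lambda x: x**2
g = lambda x: x + 1

# At the endpoints the encoded functions are recovered exactly.
assert F(2.0, 0.0, f, g, 0.0, 1.0) == f(2.0)  # t = a gives f
assert F(2.0, 1.0, f, g, 0.0, 1.0) == g(2.0)  # t = b gives g
```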

What I didn't know then is what I was really doing: rudimentary interpolation. But instead of interpolation over numerical values, interpolation over functions and a parameter (which is identical in nature, realistically).

The question quickly became: how do I encode 3 functions? Well, now, I have the tools to do so. Let \(f:\mathbb{R}\to\mathbb{R}\), \(g:\mathbb{R}\to\mathbb{R}\) and \(h:\mathbb{R}\to\mathbb{R}\) be functions, and (WLOG) \(a<b<c\in\mathbb{R}\) be constants such that:

\[ F(x,t)=\begin{cases} f(x) & t=a \\ g(x) & t=b \\ h(x) & t=c \\ \star & \star \end{cases}\]

Which is an interpolation/APO-type problem. This isn't the only way to interpolate these functions, and we won't pursue the interpolation angle further here. You can also have more than one variable to help 'transition' between each function, which is, in fact, what will end up happening.

In this post, we'll explore the intuition behind APOs more directly, and less about the interpolation itself. We'll see nice patterns come out to play with and ultimately, figure out how we might approach a problem like this.

# Combining Two Objects

Recall the definition of the single form of an APO (extended form will be given later):

\[ \phi^\star=\mu+[\phi]r\]

For some arbitrary objects \(\phi^\star,\mu\) (such as a vector field, a function, whatever). Usually when we consider this, we treat \([\phi]\) as a decider object: \(0\) when \(\phi\) is defined and anything else when not. And, ironically, we decide what this object is. This time, we'll instead leave it free. We'll fix \(\mu_1,\mu_2\) to be solutions to \(\phi^\star\). By solutions, we mean two instances which fully satisfy \(\phi^\star\) constrained by \([\phi]\).

Hence, we want \(\phi^\star\) satisfied by \(\mu_1,\mu_2\) and so we define:

\[ \begin{align} \phi^\star=\mu_1+[\phi]_1r && (1) \\ \phi^\star=\mu_2+[\phi]_2s && (2) \end{align}\]

Subtracting the two equations; \((1)-(2)\), we have:

\[ \mu_2-\mu_1=[\phi]_2s-[\phi]_1r\]

Since the right hand side is of the form of \([\phi]=\begin{cases} 0 & \phi\textrm{ defined} \\ \star & \phi\textrm{ not defined}\end{cases}\), we let:

\[ [\phi]=\mu_2-\mu_1\]

Since \(\mu_1\) also satisfies \(\phi^\star\), we can choose to define it as:

\[ \begin{align} \phi^\star&=\mu_1+[\phi]r\\ &=\mu_1+(\mu_2-\mu_1)r \end{align}\]

To confirm this is a desired result, we can write this minimally piecewise as:

\[ \phi^\star =\begin{cases} \mu_1 & r=0 \\ \mu_2 & r=1 \\ \star & \star \end{cases}\]

A transformation can be applied to \(r\) so that the object matches the desired values. Notice this also matches the vector equation of a line (linear!) through space in general.
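In code, this two-object encoding is a one-liner. The numeric values below are arbitrary stand-ins for \(\mu_1,\mu_2\):

```python
# Two-object APO: phi* = mu1 + (mu2 - mu1) * r.
# r = 0 recovers mu1, r = 1 recovers mu2; anything between interpolates.
def phi_star(mu1, mu2, r):
    return mu1 + (mu2 - mu1) * r

assert phi_star(3.0, 7.0, 0) == 3.0    # r = 0 -> mu1
assert phi_star(3.0, 7.0, 1) == 7.0    # r = 1 -> mu2
assert phi_star(3.0, 7.0, 0.5) == 5.0  # midpoint, as with a line through space
```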

# Combination of 3 - Long Way Round

The next slightly-obvious question is how we might repeat the above process, but with 3 objects. Or, in other words, how we might go about encoding three objects into a single APO. However, if you attempt to solve a system as above, with a single APO, you'll quickly run into an issue where the arbitrary object may not be defined everywhere.

Instead, we'll take the long way round. We'll first take 3 solutions \(\mu_1,\mu_2,\mu_3\), objects satisfying \(\phi^\star\), and encode them into their own APOs, \(\phi_1^\star\) and \(\phi_2^\star\). We'll then take these as solutions to \(\phi^\star\).

To begin, we take \(\mu_1,\mu_2\) to be solutions to \(\phi^\star_1\), and \(\mu_2,\mu_3\) to be solutions to \(\phi_2^\star\). This is only one choice and could be made differently, though it yields an arguably clean solution. In any case, this means that we get the following:

\[ \phi_1^\star=\mu_1+(\mu_2-\mu_1)r \\ \phi_2^\star=\mu_2+(\mu_3-\mu_2)s \\ \phi^\star=\phi_1^\star+(\phi_2^\star-\phi_1^\star)t\]

For arbitrary objects \(r,s,t\). If we then expand and factor \(\phi^\star\), we get the following:

\[ \phi^\star=\mu_1+(\mu_2-\mu_1)(r+t-rt)+(\mu_3-\mu_2)(st)\]

Provided that \(u=r+t-rt\) and \(v=st\) span the applicable space, we have that

\[ \phi^\star=\mu_1+(\mu_2-\mu_1)u+(\mu_3-\mu_2)v\]
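We can sanity-check the expansion numerically. The sketch below (with arbitrary numeric stand-ins for the objects) compares the nested form against the expanded \(u,v\) form:

```python
import random

# Nested form: encode mu1, mu2 and mu2, mu3 into two APOs, then combine.
def nested(mu1, mu2, mu3, r, s, t):
    phi1 = mu1 + (mu2 - mu1) * r
    phi2 = mu2 + (mu3 - mu2) * s
    return phi1 + (phi2 - phi1) * t

# Expanded form with u = r + t - r*t and v = s*t.
def expanded(mu1, mu2, mu3, r, s, t):
    u = r + t - r * t
    v = s * t
    return mu1 + (mu2 - mu1) * u + (mu3 - mu2) * v

# The two forms agree for any choice of objects and parameters.
for _ in range(100):
    args = [random.uniform(-2, 2) for _ in range(6)]
    assert abs(nested(*args) - expanded(*args)) < 1e-9
```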

This alludes to an extended form of the APO, which has been touched on previously. It is a natural consequence of how we decide to write the objects themselves. In any case, we have the superposition principle for decider objects, which will be properly introduced soon.

# Superposition and Extended Form

If you've ever done ODEs and/or PDEs you'll be familiar with superposition principles in both. This is similar, though at a far more basic level, and is a natural consequence of how we define it.

If \([\phi]_1\) and \([\phi]_2\) are deciders of an anonymous piecewise object \(\phi^\star\), then the object can be written in several ways: \(\phi^\star=\mu+[\phi]_1r\), \(\phi^\star=\mu+[\phi]_2s\) or \(\phi^\star=\mu+[\phi]_1u+[\phi]_2v\).

We can prove this. Let \(\phi^\star=\mu+[\phi]r\). Then:

\[ [\phi]r=\begin{cases} 0 & \phi\textrm{ defined} \\ \star & \phi\textrm{ not defined} \end{cases}= \underbrace{\begin{cases} 0 & \phi\textrm{ defined} \\ \star & \phi\textrm{ not defined} \end{cases}}_{[\phi]_1u} + \underbrace{\begin{cases} 0 & \phi\textrm{ defined} \\ \star & \phi\textrm{ not defined} \end{cases}}_{[\phi]_2v}\]

Therefore:

\[ \phi^\star=\mu+[\phi]_1u+[\phi]_2v\]

More generally, we write:

\[ \phi^\star=\mu+\sum_{n}{[\phi]_nr_n}\]

This is an invariably important fact. Having infinitely many ways of writing \(0\) means you can write \(0\) as much as you want and get away with it. The only restriction appears when you start formulating these things specifically: making it a function, giving it a class, whatever. Until then, though, it's everything that satisfies what we want. Hooray.
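To make the superposition concrete, here's a small numeric sketch. The deciders below are my own illustrative choices: both vanish exactly on the 'defined' set (here, the integers), so any weighted combination of them collapses back to \(\mu\) there:

```python
import math

# Two different deciders that both vanish exactly where phi is "defined"
# (illustratively: at the integers).
d1 = lambda x: math.sin(math.pi * x)  # zero at every integer
d2 = lambda x: x - round(x)           # also zero at every integer

mu = lambda x: x**2
phi_star = lambda x, u, v: mu(x) + d1(x) * u + d2(x) * v

# On the defined set, every choice of weights u, v collapses to mu
# (up to floating-point noise from sin).
for n in range(-3, 4):
    assert abs(phi_star(n, 5.0, -2.0) - mu(n)) < 1e-12
```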

It also so happens that not only is this fact as obvious as the pigeonhole principle when you *first* see it (if you have \(m\) holes and \(n>m\) pigeons, at least one hole must contain more than one pigeon), it also leads to a shortcut we'll use to generalise the encoding of objects into a single APO in a fairly simple manner. It's only one *choice* for how we do it; you could devise a different method, but since we've already got one going, we'll continue with that.

# Generalisation

Let \(\mu_1,\mu_2,\dots,\mu_n\) be solutions of the APO \(\phi^\star\). Observing that in our 2-case and 3-case examples we have \(n-1\) pairs of \(\mu_k,\mu_{k+1}\) for \(k\in[1,2,\dots,n-1]\), let \(\mu_k\) and \(\mu_{k+1}\) be solutions to \(\phi^\star\) such that:

\[ \phi^\star=\mu_k+\sum_{i=0}^{n-1}{[\phi]_ir_i}\\ \phi^\star=\mu_{k+1}+\sum_{i=0}^{n-1}{[\phi]_is_i}\]

By the superposition principle. Repeating what we've done before, we see that:

\[ \mu_{k+1}-\mu_k=\sum_{i=0}^{n-1}{[\phi]_i(r_i-s_i)}=:[\phi]_k\]

Using \([\phi]_k=\mu_{k+1}-\mu_{k}\) for \(k\in[1,2,\dots,n-1]\) and the fact that \(\mu_1\) is a solution,

\[ \phi^\star=\mu_1+\sum_{k=1}^{n-1}{(\mu_{k+1}-\mu_k)t_k}\]

Remember that we made a specific design decision with this formula: the sum telescopes when \(t_k=1\) for \(k\in[1,\dots,m]\) (with \(m<n\)) and \(t_k=0\) afterwards, giving \(\phi^\star=\mu_{m+1}\) for that particular \(m\). Go on, see for yourself! It can be proven inductively (homework!), but is easiest to see by observation.
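Here's the telescoping behaviour checked numerically (the \(\mu_k\) values below are arbitrary; note Python's 0-indexing, so \(\mu_{m+1}\) is `mus[m]`):

```python
# Telescoping check: with t_k = 1 for k <= m and t_k = 0 afterwards,
# phi* = mu_1 + sum_{k=1}^{n-1} (mu_{k+1} - mu_k) t_k collapses to mu_{m+1}.
mus = [3.0, -1.0, 4.0, 1.0, 5.0]  # mu_1 .. mu_5, arbitrary values
n = len(mus)

def phi_star(ts):  # ts = [t_1, ..., t_{n-1}]
    return mus[0] + sum((mus[k + 1] - mus[k]) * ts[k] for k in range(n - 1))

for m in range(n):  # m = 0 means no t_k is switched on, recovering mu_1
    ts = [1] * m + [0] * (n - 1 - m)
    assert phi_star(ts) == mus[m]  # mu_{m+1} in the post's 1-indexing
```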

# Examples

## Parametric Surface/Plane

Recall the 3-case APO: \(\phi^\star=\mu_1+(\mu_2-\mu_1)u+(\mu_3-\mu_2)v\).

We parameterise this to obtain the parametric surface equation \(r(u,v)=\vec{v_1}+(\vec{v_2}-\vec{v_1})u+(\vec{v_3}-\vec{v_2})v\) (essentially a label and type substitution: \(\mu\) becomes \(\vec{v}\), a vector).

Let \(\vec{v_1}=(1,1,1)\), \(\vec{v_2}=(1,0,0)\) and \(\vec{v_3}=(1,0,1)\). We then have:

\[ r(u,v)=(1,1,1)+(0,-1,-1)u+(0,0,1)v=(1,1-u,1-u+v)\]

Notice that \(r(0,0)=\vec{v_1}\), \(r(1,0)=\vec{v_2}\) and \(r(1,1)=\vec{v_3}\).
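And checked in code, using the same three points:

```python
# Parametric plane r(u, v) = v1 + (v2 - v1)*u + (v3 - v2)*v, componentwise.
def r(u, v):
    v1, v2, v3 = (1, 1, 1), (1, 0, 0), (1, 0, 1)
    return tuple(a + (b - a) * u + (c - b) * v for a, b, c in zip(v1, v2, v3))

assert r(0, 0) == (1, 1, 1)  # v1
assert r(1, 0) == (1, 0, 0)  # v2
assert r(1, 1) == (1, 0, 1)  # v3
```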

## Polynomials

This one is obligatory. We'll go for something a bit different though: still univariate, but a series-inspired approach. Let \(\mu_1,\mu_2,\dots,\mu_{n+1}=1,x,\dots,x^{n}\). Then we have that:

\[ \phi^\star(x)=1+\sum_{k=0}^{n-1}{r_k(x)(x^{k+1}-x^k)}\]

This can be factored such that:

\[ \phi^\star(x)=1+(x-1)\sum_{k=0}^{n-1}{r_k(x)x^k}\]

From here, we'll let \(r_k(x)=1\) for all \(k\). Then, since we know by definition that this results in \(\phi^\star(x)=x^n\), we have:

\[ x^n=1+(x-1)\sum_{k=0}^{n-1}{x^k}\]

And therefore, for \(x\neq 1\):

\[ \frac{x^n-1}{x-1}=\sum_{k=0}^{n-1}{x^k}\]
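A quick numerical check of the identity, over a few values of \(x\neq 1\) and \(n\):

```python
# Check (x^n - 1)/(x - 1) == sum_{k=0}^{n-1} x^k for x != 1.
for x in (2.0, -0.5, 3.7):
    for n in range(1, 8):
        lhs = (x**n - 1) / (x - 1)
        rhs = sum(x**k for k in range(n))
        assert abs(lhs - rhs) < 1e-9
```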

# Conclusion

This stuff is bizarre. I like it.