Many Functions - But For Real

January 12, 2022 | 14 minutes, 7 seconds

Introduction

It's been a little while since my last post, and honestly, not much has happened in the interim. I've begun working on a personal document for piecewise notes and the like, and that's a long-term hobby project.

I digress. There are several interesting ideas I want to discuss in this post, and they have to do with how we approach a combination of interpolation and piecewise-constructed functions. We take a similar approach to what we've done in the past, with our logic and combinations of conditions. This time, however, we do so using continuous functions.

Logical conditions and combinations

Combination of logical AND statements

We briefly covered this topic back in our previous post on modulo operators in the context of piecewise functions, but it's a good idea to bring it up here again in full generality.

Namely, consider the condition with functions \((f_1=0)\land (f_2=0)\land \dots\land (f_n=0)\). We can consider an equivalent condition in vector form: \((f_1,f_2,\dots,f_n)=\mathbf{0}\).

Let us then consider a set of functions, \(p_k(f)\), \(k\in\{1,\dots,n\}\), such that:

  1. \(p_k(f)=0\iff f=0\),
  2. \(\forall f,\ p_k(f)\geq 0\).

Examples of such functions include \(f^{2n}\) (even powers, so that non-negativity holds) and \(\abs{f}\). By our construction, we can consider our vector equation in the following form:

\[ (p_1(f_1),p_2(f_2),\dots,p_n(f_n))=\mathbf{0}\]

Taking the 1-norm (taxicab norm) of both sides gives us:

\[ \abs{p_1(f_1)}+\abs{p_2(f_2)}+\dots+\abs{p_n(f_n)}=0\]

Since we have \(p_k(f)\geq 0\), this can be rewritten nicely as:

\[ \sum_{k=1}^{n}{p_k}(f_k)=0\]

To confirm that this condition matches our original, suppose the sum vanishes and, for any \(m\in\{1,2,\dots,n\}\), isolate its term:

\[ p_m(f_m)=-\sum_{\substack{k=1\\k\neq m}}^{n}{p_k}(f_k)\implies p_m(f_m)\leq 0\]

But since \(p_m(f_m)\geq0\), the only possibility is that \(p_m(f_m)=0\); and since \(m\) was arbitrary, \(p_m(f_m)=0\) (and hence \(f_m=0\), by property 1) for all \(m\). We have therefore essentially 'combined' a finite set of conditions. This may raise the question of how this applies to, say, a condition like \(x\geq 0\); this is something we'll reach later in the post. In the end, we have the following identity:

\[ \sum_{k=1}^{n}{p_k}(f_k)=0\iff\bigwedge_{k=1}^{n}{(f_k=0)}\]
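As a quick sanity check, here's a minimal Python sketch of the 'and' combination, taking \(p(f)=\abs{f}\); the functions \(f_1,f_2\) below are hypothetical examples, not anything from previous posts:

```python
# 'And' combination with p(f) = |f|, which satisfies p(f) = 0 iff f = 0
# and p(f) >= 0 everywhere. f1 and f2 are arbitrary example functions.

def f1(x, y):
    return x - 1  # zero exactly on the line x = 1

def f2(x, y):
    return y + 2  # zero exactly on the line y = -2

def and_combined(x, y):
    # Vanishes iff f1 = 0 and f2 = 0 hold simultaneously.
    return abs(f1(x, y)) + abs(f2(x, y))

assert and_combined(1, -2) == 0  # both conditions hold
assert and_combined(1, 0) > 0    # only f1 = 0
assert and_combined(0, -2) > 0   # only f2 = 0
```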

Combination of logical OR statements

Previously, we've touched on how to do this: via the null factor law, with functions \(f_1\cdot f_2\cdot\ \dots\ \cdot f_n=0\). We'll instead opt to do this via a technique which connects interpolation with the null factor law.

Let us consider the following APO:

\[ f=\begin{cases} 0 & f_1 = 0 \\ 0 & f_2 = 0 \\ \vdots & \vdots \\ 0 & f_n = 0 \\ \star & \star \end{cases}\]

This is a fairly straightforward interpolation problem: let \(g\) be some nonzero function, so that \(f=g\displaystyle\prod_{k=1}^{n}{f_k}\). By definition, therefore, we have that:

\[ f=0 \implies (g=0)\lor(f_1=0)\lor(f_2=0)\lor\dots\lor(f_n=0)\]

Since \(g\) is nonzero, this reduces to \(f=0\implies\bigvee_{k=1}^{n}{(f_k=0)}\). We therefore establish the identity:

\[ \prod_{k=1}^{n}{f_k}=0\iff \bigvee_{k=1}^{n}{(f_k=0)}\]
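Similarly, a minimal sketch of the 'or' combination, reusing the same hypothetical \(f_1,f_2\):

```python
# 'Or' combination via the null factor law: the product vanishes iff
# at least one factor does. f1 and f2 are the same arbitrary examples.

def f1(x, y):
    return x - 1

def f2(x, y):
    return y + 2

def or_combined(x, y):
    return f1(x, y) * f2(x, y)

assert or_combined(1, 5) == 0   # f1 = 0 suffices
assert or_combined(7, -2) == 0  # f2 = 0 suffices
assert or_combined(0, 0) != 0   # neither condition holds
```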

Remarks

In essence, we see that addition is 'nearly' an analogue of the logical 'and', and multiplication an analogue of the logical 'or'. Furthermore, the null factor law represents the zeroes of a function with a higher-dimensional graph than the functions which compose it.

I also conjecture that there is no generalised and continuous logical 'not' analogue for \(f_k=0\) directly. This can be seen here: \(\lnot(f_k=0)\iff (f_k>0)\lor(f_k<0)\iff \abs{f_k}>0\). This could only be expressed in our framework if \(f_k\) were a strictly nonzero function (in which case \(\abs{f_k}>0\) holds everywhere).

Rewriting Common Conditions

Intervals with \(\max\) and \(\min\)

\(\max\) and \(\min\) are two incredibly useful piecewise functions which pop up everywhere in continuous contexts.

In a piecewise context, suppose we want a function that has a condition \(a\leq x\leq b\) for some constants \(a,b\in\mathbb{R}\), and we want to write this condition in a form that can be combined with other conditions as above. We note that this is equivalent to:

\[ x\geq a \land x\leq b\]

We can rewrite each condition, respectively, in terms of \(\max\):

\[ \begin{align*} \max(x,a) &=x \implies \max(a-x,0) =0 \\ \max(x,b) &=b \implies \max(x-b,0) =0 \end{align*}\]

And using our 'and' combination analogue from earlier,

\[ \abs{\max(a-x,0)}+\abs{\max(x-b,0)}=0\]

Since each \(\max\) term is always non-negative, we have:

\[ \max(a-x,0)+\max(x-b,0)=0\]

After some simplification, this becomes \(\max\{a-x,x-b,0\}=0\). This can similarly be written \(\max(\abs{x-\frac{a+b}{2}}+\frac{a-b}{2},0)=0\).

Alternatively, one might make use of the clamping function: since \(\ell_{a}^{b}(x)=x\) iff \(a\leq x\leq b\), we get the condition \(\ell_{a-x}^{b-x}(0)=0\) (note that this is not a strictly nonnegative function). In other words,

\[ \frac{1}{2}(a+b-2x+\abs{x-a}-\abs{x-b})=0\]
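Both interval encodings can be checked numerically; a quick Python sketch, where \(a\), \(b\) and the sample points are arbitrary choices:

```python
# Two encodings of the condition a <= x <= b; a, b and the test points
# are arbitrary choices for illustration.
a, b = -1.0, 2.0

def interval_max(x):
    # max(a - x, x - b, 0) = 0  iff  a <= x <= b
    return max(a - x, x - b, 0.0)

def interval_clamp(x):
    # (a + b - 2x + |x - a| - |x - b|) / 2 = 0  iff  a <= x <= b
    return (a + b - 2 * x + abs(x - a) - abs(x - b)) / 2

for x in [-2.0, -1.0, 0.5, 2.0, 3.0]:
    inside = a <= x <= b
    assert (interval_max(x) == 0) == inside
    assert (interval_clamp(x) == 0) == inside
```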

Hypercubes

Recall that the explicit equation of the boundary of an \(n\)-dimensional unit hypercube centred at \(\mathbf{0}\) is given by \(\max\{\abs{x_1},\abs{x_2},\dots,\abs{x_n}\}=1\). Notably, suppose that we define the function:

\[ f(\vec{x})=\max_{i}\{\abs{x_i}\}\]

Suppose that we're interested in two regions in particular: the interior and exterior of the hypercube. Intuition says that \(f(\vec{x})\leq 1\) gives us the interior; i.e. \(\forall i,\ -1\leq x_i\leq 1\). Conversely, \(f(\vec{x})\geq 1\) gives the exterior. More formally, the set \(\{\vec{x}:f(\vec{x})\leq 1\}\) is bounded.

For example, instead of having two separate conditions for, say, \(a\leq x\leq b\) and \(c\leq y\leq d\), one would use a transformation of the interior of the unit hypercube and go from there.
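A minimal sketch of the interior/exterior test (the dimension and sample points are arbitrary choices):

```python
# f(x) = max_i |x_i|: strictly less than 1 inside the unit hypercube,
# exactly 1 on its boundary, greater than 1 outside.

def hypercube_f(xs):
    return max(abs(x) for x in xs)

assert hypercube_f([0.5, -0.9, 0.0]) < 1    # interior point
assert hypercube_f([1.0, 0.3, -1.0]) == 1   # boundary point
assert hypercube_f([2.0, 0.0, 0.0]) > 1     # exterior point
```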

Composing a function

My previous attempts at fully generalising the composition of relations encompassing multiple equations, each bounded by conditions of some kind, usually began by defining \(y=\dots\) and going from there. This is a naive approach, and was noted to be so at the time.

It is therefore reasonable to want an approach which starts from a relation that is not a well-defined function and manipulates from there. Unfortunately, this does not work with interpolation techniques directly - they require that the function be well defined.

We therefore make use of the same technique as our 'or' analogue: given functions \(f_k\) and conditions \(C_k=0\), we treat the equations \(f_k=0\) as additional conditions:

\[ f=\begin{cases} 0 & f_1=0 & C_1=0 \\ 0 & f_2=0 & C_2=0 \\ \vdots & \vdots & \vdots \\ 0 & f_n=0 & C_n=0 \\ \star & \star & \star \end{cases}\]

As opposed to the following, which could not be manipulated:

\[ f=\begin{cases} f_1 & C_1=0 \\ f_2 & C_2=0 \\ \vdots & \vdots \\ f_n & C_n=0 \\ \star & \star \end{cases}\]

In any case, let \(p\) be an arbitrary function as given in our 'and' analogue (for simplicity we write \(p\) everywhere, though each instance need not be the same), so that we may rewrite our function as:

\[ \begin{align*} f &=\begin{cases} 0 & p(f_1)+p(C_1)=0 \\ 0 & p(f_2)+p(C_2)=0 \\ \vdots & \vdots \\ 0 & p(f_n)+p(C_n)=0 \\ \star & \star \end{cases} \\ &= g\cdot\prod_{k=1}^{n}{\left(p(f_k)+p(C_k)\right)} \end{align*}\]

for some nonzero function \(g\). From there, we know that \(f=0\) iff at least one factor vanishes, i.e. iff \(f_k=0\) and \(C_k=0\) simultaneously for some \(k\), and so we have the following implicit equation:

\[ g\cdot\prod_{k=1}^{n}{\left(p(f_k)+p(C_k)\right)}=0\]
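Since the construction is entirely mechanical, it translates to code directly. Here's an illustrative Python sketch taking \(p=\abs{\cdot}\) and \(g\equiv 1\) (both arbitrary valid choices); the compose helper and its name are my own invention:

```python
# Build the implicit function  prod_k (p(f_k) + p(C_k))  from a list of
# (f_k, C_k) pairs, taking p = abs and g = 1.

def compose(pieces):
    """pieces: a list of (f_k, C_k) pairs, each a function of (x, y)."""
    def implicit(x, y):
        product = 1.0
        for f_k, c_k in pieces:
            product *= abs(f_k(x, y)) + abs(c_k(x, y))
        return product
    return implicit

# e.g. the y = |x| example in the next section is:
# compose([(lambda x, y: y - x, lambda x, y: max(-x, 0)),
#          (lambda x, y: y + x, lambda x, y: max(x, 0))])
```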

Examples

\(y=\abs{x}\)

We can rewrite \(y=\abs{x}\) as the following:

\[ \begin{align*} f(x,y) &=\begin{cases} 0 & y-x=0 & x\geq 0 \\ 0 & y+x=0 & x\leq 0 \end{cases} \\ &=\begin{cases} 0 & y-x=0 & \max(-x,0)=0 \\ 0 & y+x=0 & \max(x,0)=0 \end{cases} \\ &=\begin{cases} 0 & \abs{y-x}+\max(-x,0)=0 \\ 0 & \abs{y+x}+\max(x,0)=0 \end{cases} \\ &=\left(\abs{y-x}+\max(-x,0)\right)\left(\abs{y+x}+\max(x,0)\right)g(x,y) \end{align*}\]

And so then we have the equation:

\[ \left(\abs{y-x}+\max(-x,0)\right)\left(\abs{y+x}+\max(x,0)\right)g(x,y)=0\]

There are presumably simplifications we can make here, but they're not at all obvious. (Todo! Find identities!)
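In the meantime, a quick numerical sanity check of this equation against \(y=\abs{x}\) (taking \(g\equiv 1\)), at a few arbitrary sample points:

```python
# Implicit equation for y = |x|, with g = 1.
def f(x, y):
    return (abs(y - x) + max(-x, 0)) * (abs(y + x) + max(x, 0))

for x in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    assert f(x, abs(x)) == 0       # points on y = |x| satisfy f = 0
    assert f(x, abs(x) + 1) != 0   # points off the graph do not
```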

Unit right-angled triangles

We can compose a right-angled triangle bounded by equations \(x=0\) for \(0\leq y\leq 1\), \(y=1-x\) for \(0\leq x\leq 1\) and \(y=0\) for \(0\leq x\leq 1\).

We therefore have:

\[ \begin{align*} f(x,y) &= \begin{cases} 0 & x=0 & 0\leq y\leq 1 \\ 0 & y=0 & 0\leq x\leq 1 \\ 0 & x+y-1=0 & 0\leq x\leq 1 \end{cases} \\ &= \begin{cases} 0 & x=0 & \max(\abs{y-\frac{1}{2}}-\frac{1}{2},0)=0 \\ 0 & y=0 & \max(\abs{x-\frac{1}{2}}-\frac{1}{2},0)=0 \\ 0 & x+y-1=0 & \max(\abs{x-\frac{1}{2}}-\frac{1}{2},0)=0 \end{cases} \\ &= \left(\abs{x}+\max(\abs{y-\frac{1}{2}}-\frac{1}{2},0)\right)\left(\abs{y}+\max(\abs{x-\frac{1}{2}}-\frac{1}{2},0)\right)\left(\abs{x+y-1}+\max(\abs{x-\frac{1}{2}}-\frac{1}{2},0)\right) \end{align*}\]

Setting \(f(x,y)=0\) then gives the boundary of the triangle.

Closed Polygons

(Hi Matt Parker "no equation for a triangle" smh)

Honestly, I'm not even going to bother deriving the equation here. Here, have it, you curious scoundrel, you.

Yes, there is a general polygon formula. It is given as follows, where \(u_x\) is the x-coordinate of the point \(u\) and \(u_y\) is its y-coordinate. Let us have a set of vertices \(p_1,p_2,\dots,p_n\):

\[ \begin{align*} T_{u,v}(x,y) &= (v_x-u_x)y-(v_y-u_y)(x-u_x)-(v_x-u_x)u_y \\ C_{u,v}(x,y) &= \max\left(\abs{\frac{2x-u_x-v_x}{v_x-u_x}},\abs{\frac{2y-u_y-v_y}{v_y-u_y}}\right) \end{align*}\]

\[ \left(\abs{T_{p_{n},p_{1}}(x,y)}+\max(C_{p_{n},p_{1}}(x,y)-1,0)\right)\prod_{k=1}^{n-1}{\left(\abs{T_{p_{k},p_{k+1}}(x,y)}+\max(C_{p_{k},p_{k+1}}(x,y)-1,0)\right)} =0\]

Good day, disastrous human. Also, consider that these equations can all be written explicitly using elementary function composition. I wouldn't even bother, honestly; sounds like pain.
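For the curious, here's an illustrative Python sketch of \(T\), \(C\) and the full product. Note that \(C\) divides by the edge's coordinate extents, so axis-aligned edges would need special care; the triangle vertices below are an arbitrary example:

```python
# Sketch of the closed-polygon formula. C divides by the edge's coordinate
# extents, so axis-aligned edges would need care; vertices are arbitrary.

def T(u, v, x, y):
    # Line through u and v: (v_x - u_x)(y - u_y) - (v_y - u_y)(x - u_x) = 0.
    return (v[0] - u[0]) * (y - u[1]) - (v[1] - u[1]) * (x - u[0])

def C(u, v, x, y):
    # Bounding-box condition for the segment uv, hypercube-style.
    return max(abs((2 * x - u[0] - v[0]) / (v[0] - u[0])),
               abs((2 * y - u[1] - v[1]) / (v[1] - u[1])))

def polygon(points, x, y):
    # Zero exactly on the boundary p_1 -> p_2 -> ... -> p_n -> p_1.
    product = 1.0
    for u, v in zip(points, points[1:] + points[:1]):
        product *= abs(T(u, v, x, y)) + max(C(u, v, x, y) - 1, 0)
    return product

# Arbitrary example: a triangle with no axis-aligned edges.
tri = [(0.0, 0.0), (2.0, 1.0), (1.0, 3.0)]
assert polygon(tri, 1.0, 0.5) == 0  # midpoint of the first edge
assert polygon(tri, 1.0, 1.0) != 0  # an interior point
```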

Advantages and Pitfalls

Interestingly, most major graphing packages (MATLAB/Mathematica/GeoGebra) fail to accurately plot the intersection of these multivariable functions with \(0\), which is essentially what we need here. If you graph one of our constructed two-variable functions as a surface in 3D, you'll find that we have composed a function which touches \(0\) exactly where our given conditions hold!

In any case, the inability to graph these equations reliably is problematic. With that being said, approximations are still fine, though they lose accuracy very quickly.

We have also made obvious a pitfall of this method: the lack of simplicity in its final composition. We have created a difficult-to-simplify function which is very unappealing to pretty much everyone. Ah well.

On the other hand, we have created a system that does not depend on the continuity of the separate pieces (and which yields a continuous result if you opt for continuous pieces!). The method to produce these functions is systematic and fairly intuitive to understand, especially with access to a graphing system.

Mainly, the idea of this method is to create some general, continuous (though still not smooth), well-defined, non-hacky (\(\sqrt{\frac{\abs{x}}{x}}\) is evil pseudosign undefined-abuse) function in a way that is systematic and fairly straightforward. It definitely succeeds here.

Final remarks on closed shapes

We've finally come full circle. Note that, in the past, we established a formula - without really proving it or justifying the intuition nicely - that gives an equation of a closed shape:

\[ \max(f_1,f_2,\dots,f_n)=0\]

This formula is the formula for a closed shape, where \(f_1, f_2, \dots, f_n\) are the functions which form the boundaries under the following conditions:

\[ (f_1\leq 0)\land (f_2\leq 0)\land \dots\land (f_n\leq 0)\]

That is to say, for each boundary equation \(f_k\), we have the following requirements:

\[ f_k=0,\quad f_m\leq 0\ \ \forall m\in\{1,2,\dots,n\}\setminus\{k\}\]

We can therefore construct a function, and rewrite our conditions to fit a form we need (i.e. in terms of the max function):

\[ \begin{align*} F &=\begin{cases} 0 & f_1=0 & f_2\leq 0 & f_3\leq 0 & \dots & f_n\leq 0 \\ 0 & f_2=0 & f_1\leq 0 & f_3\leq 0 & \dots & f_n\leq 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & f_n=0 & f_1\leq 0 & f_2\leq 0 & \dots & f_{n-1}\leq 0 \\ \star & \star & \star & \star & \dots & \star \end{cases} \\ &=\begin{cases} f_1 & f_2\leq f_1 & f_3\leq f_1 & \dots & f_n\leq f_1 \\ f_2 & f_1\leq f_2 & f_3\leq f_2 & \dots & f_n\leq f_2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ f_n & f_1\leq f_n & f_2\leq f_n & \dots & f_{n-1}\leq f_n \\ \star & \star & \star & \dots & \star \end{cases}\\ &= \begin{cases} f_1 & \max(f_1,f_2,f_3,\dots,f_n)=f_1 \\ f_2 & \max(f_1,f_2,f_3,\dots,f_n)=f_2 \\ \vdots & \vdots \\ f_n & \max(f_1,f_2,f_3,\dots,f_n)=f_n \\ \star & \star \end{cases} \\ &= \max(f_1,f_2,\dots,f_n)=0 \end{align*}\]

In essence, we've rewritten our piece values in terms of our first piece condition in each piece. We then recognised that the equation is in the form of a standard max function, and adjusted for that.

Note that this equation is the equation of the boundary! For the equation of the filled shape, we take our conditions as before on the boundaries and construct our function in a similar manner to before:

\[ \begin{align*} G &= \begin{cases} 0 & f_1\leq 0 & f_2\leq 0 & \dots & f_n\leq 0 \\ \star & \star & \star & \dots & \star \end{cases} \\ &= \begin{cases} 0 & \max(f_1,f_2,\dots,f_n,0)=0 \\ \star & \star \end{cases} \end{align*}\]

This gives us a similar result to before, with a slight difference:

\[ G=\max(f_1,f_2,\dots,f_n,0)=0\]

Alternatively, \(G=\max(F,0)=0\implies\abs{F}=-F\).

Examples

Right-angled triangle

Consider once again a triangle with 3 sides \(x=0\), \(y=0\) and \(y=1-x\). Since we need \(x\geq 0\), we have \(-x\leq 0\) and so \(f_1(x,y)=-x\). Likewise with \(y\geq 0\) we have \(f_2(x,y)=-y\). Finally, we require \(y\leq 1-x\) and so \(f_3(x,y)=x+y-1\). We therefore achieve the following:

Unfilled shape: \(\max(-x,-y,x+y-1)=0\). Filled shape: \(\max(-x,-y,x+y-1,0)=0\).
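A quick numerical check of both equations, at a few arbitrarily chosen points:

```python
# Boundary vs. filled unit right triangle with vertices (0,0), (1,0), (0,1).

def boundary(x, y):
    return max(-x, -y, x + y - 1)

def filled(x, y):
    return max(-x, -y, x + y - 1, 0)

assert boundary(0.0, 0.5) == 0 and filled(0.0, 0.5) == 0     # on an edge
assert boundary(0.25, 0.25) < 0 and filled(0.25, 0.25) == 0  # interior
assert boundary(1.0, 1.0) > 0 and filled(1.0, 1.0) > 0       # exterior
```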

Simple triangle

Let's consider a triangle bounded by sides \(y=-1\), \(y=1-2x\) and \(y=2x+1\). We note:

\[ \begin{align*} y\geq -1 &\implies f_1(x,y)=-1-y \\ y\leq 1-2x &\implies f_2(x,y)=y+2x-1 \\ y\leq 2x+1 &\implies f_3(x,y)=y-2x-1 \end{align*}\]

The unfilled equation is therefore \(\max(-y-1,y+2x-1,y-2x-1)=0\). There's a lot we can do to clean this up:

\[ \begin{align*} &\max(-y-1,y+2x-1,y-2x-1) &= 0 \\ \implies& \max(-y,y+2x,y-2x) &=1 \\ \implies & \max(\max(y+2x,y-2x),-y) &= 1 \\ \implies & \max(y+\abs{2x},-y) &= 1 \\ \implies & \abs{x}+\max(y+\abs{x},-y-\abs{x}) &= 1 \\ \implies & \abs{x}+\abs{y+\abs{x}} &= 1 \\ \end{align*}\]

Magic :)
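That chain of identities can also be sanity-checked numerically; a quick sketch comparing the first and last expressions at random points:

```python
import random

# Compare the first and last expressions of the chain at random points.
for _ in range(1000):
    x = random.uniform(-3, 3)
    y = random.uniform(-3, 3)
    lhs = max(-y - 1, y + 2 * x - 1, y - 2 * x - 1)
    rhs = abs(x) + abs(y + abs(x)) - 1
    assert abs(lhs - rhs) < 1e-12
```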

Edit (30/01/2022): This approach works for convex shapes only. With that being said, the typical approach for concave shapes in general is to split them into their convex parts, and then treat each part as its own shape (to then combine with the first approach as before).