Step Overload and Naively Patching Discontinuities

November 05, 2021 | 9 minutes, 57 seconds


Discontinuities don't play nicely in the world of piecewise functions; namely, in the class of piecewise functions that needn't be directly piecewise defined, such as those given by function composition, which happen to be continuous themselves. We have, indirectly, observed the property that the composition of two continuous functions is also continuous.

But now it's time to take our safety hats off. As we observed with the sign function, we can come up with formulas for discontinuous functions that don't directly employ piecewise notation. The method we'll focus on today is that which relies on limits (or limit abuse, take your pick). Furthermore, we'll finally acknowledge the existence of step functions. We focus on jump discontinuities and removable discontinuities.

The basis of this is essentially defining a function \(f:D\to\mathbb{R}\) for which:

\[ f(x):=\lim_{\varepsilon\to0}{F(x,\abs{\varepsilon})}\]

If \(F\) is continuous in an open ball around \((x,0)\) then we have nothing to worry about (but of course, it's never that easy, and you didn't come for continuity). Our goal is simple: find such a function \(F\), and see how we might construct one in general. In particular, we focus on two steps:

  1. Creating a 'patch' around a discontinuity for some function \(f(x)\),
  2. Applying the patch on \(f(x)\) globally.

Step with a Hole (Pseudostep?)

'Pseudosignum' (as I call it...) is the function:

\[ \begin{align} \operatorname{psgn}(x) &=\begin{cases} 1 & x>0 \\ -1 & x<0 \end{cases} \\ &=\begin{cases} 1 & \abs{x}=x \\ -1 & \abs{x}=-x \end{cases} \\ &= -1 + (\abs{x}+x)\frac{2}{2x} \\ &= \frac{\abs{x}}{x} \end{align}\]

A similar derivation can be given for the step function not defined at \(0\):

\[ \begin{align*} H(x) &= \begin{cases} 1 & x>0 \\ 0 & x<0 \end{cases} \\ &= \begin{cases} 1 & \abs{x}=x \\ 0 & \abs{x}=-x \end{cases} \\ &= (\abs{x}+x)\frac{1}{2x} = (\abs{x}+x)\frac{1}{2\abs{x}} \\ &= \frac{\abs{x}+x}{2x} \end{align*}\]

Notice that this is equivalent to \(\frac{1}{x}\max\{x,0\}\). We'll revisit this (pseudostep) function later when we go about patching discontinuities, and noting the effect our particular patch has on this function.
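These closed forms are easy to sanity-check numerically, away from \(x=0\). A minimal Python sketch (the function names are my own):

```python
def psgn(x):
    # 'pseudosignum': |x| / x, undefined at x = 0
    return abs(x) / x

def pstep(x):
    # step with a hole: (|x| + x) / (2x), undefined at x = 0
    return (abs(x) + x) / (2 * x)

assert psgn(3.0) == 1.0 and psgn(-2.5) == -1.0
assert pstep(3.0) == 1.0 and pstep(-2.5) == 0.0
assert pstep(7.0) == max(7.0, 0.0) / 7.0  # the max{x, 0} / x form
```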

First Steps

Right Sided

Consider the step function:

\[ H(x)=\begin{cases} 1 & x\geq 0 \\ 0 & x<0 \end{cases}\]

Which has a discontinuity at \(x=0\). As per our introduction, our approach is to apply a 'patch' around this discontinuity for \(x\approx0\), which is exactly what we'll do.

We'll do this by noticing two things: firstly, that \(H(0)=1\), and secondly, that \(H(0^-)=0\). In other words, for some \(\delta>0\) with \(\delta\approx0\), \(H(0)=1\) and \(H(0-\delta)=0\). We hence encounter the interpolation problem,

\[ H_\delta(x)\approx\begin{cases} 1 & x=0 \\ 0 & x=-\delta \\ \star & \star \end{cases}\]

Luckily for us, an answer to this problem is easy enough: \(H_\delta(x)\approx\frac{x+\delta}{\delta}\). Moreover, we can now represent \(H(x)\) "continuously" as:

\[ H(x)\approx\begin{cases} 1 & x\geq0 \\ H_\delta(x) & -\delta\leq x\leq0 \\ 0 & x\leq-\delta \end{cases}\]

This is in fact just a gluing problem, for which we have a formula pre-derived:

\[ H(x)\approx H_\delta(\max(\min(x,0),-\delta))\]

Using the fact \(\delta>0\) we can rewrite this as:

\[ H(x)\approx\max(\min(\frac{x+\delta}{\delta},1),0)\]

Now the next issue is formalising this idea. We know that we have \(\delta>0\) and \(\delta\approx 0\), but this isn't formal. So we'll draw on the formulation from the introduction, letting \(\delta=\abs{\varepsilon}\) to give us:

\[ \begin{align} H(x)&=\lim_{\varepsilon\to0}{\max(\min(\frac{x+\abs{\varepsilon}}{\abs{\varepsilon}},1),0)}\\ &=\lim_{\varepsilon\to0}{\frac{1}{2}\left(1+\abs{\frac{x+\abs{\varepsilon}}{\abs{\varepsilon}}}-\abs{\frac{x}{\abs{\varepsilon}}}\right)} \\ &=\lim_{\varepsilon\to0}{\frac{1}{2\abs{\varepsilon}}\left(\abs{\varepsilon}+\abs{x+\abs{\varepsilon}}-\abs{x}\right)} \end{align}\]
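As a sanity check, we can evaluate the final expression with a small fixed \(\varepsilon\) in place of the limit (a Python sketch; the name `H_right` is mine):

```python
def H_right(x, eps=1e-9):
    # (1 / (2|eps|)) * (|eps| + |x + |eps|| - |x|), eps small but fixed
    e = abs(eps)
    return (e + abs(x + e) - abs(x)) / (2 * e)

assert H_right(0.0) == 1.0              # H(0) = 1 by construction
assert abs(H_right(1.0) - 1.0) < 1e-6   # 1 for x > 0
assert abs(H_right(-1.0)) < 1e-6        # 0 for x < 0
```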

Left Sided

We'll skip over the introduction and just give the working out.

\[ H(x)=\begin{cases} 1 & x>0 \\ 0 & x\leq 0 \end{cases}\]

We then define \(H_\delta(x)=\frac{x}{\delta}\) (where \(H_\delta(\delta)=1\) and \(H_\delta(0)=0\)), such that:

\[ H(x) = \begin{cases} 1 & x\geq\delta \\ H_\delta(x) & 0\leq x\leq\delta\\ 0 & x\leq0 \end{cases}\]

And hence \(H(x)\approx H_\delta(\max(\min(x,\delta),0))\).

Rewriting with \(\delta=\abs{\varepsilon}\to0\) and simplifying as before:

\[ H(x)=\lim_{\varepsilon\to0}{\frac{1}{2\abs{\varepsilon}}\left(\abs{\varepsilon}+\abs{x}-\abs{x-\abs{\varepsilon}}\right)}\]
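The same check for the left-sided variant (again a Python sketch, with my own naming):

```python
def H_left(x, eps=1e-9):
    # (1 / (2|eps|)) * (|eps| + |x| - |x - |eps||), eps small but fixed
    e = abs(eps)
    return (e + abs(x) - abs(x - e)) / (2 * e)

assert H_left(0.0) == 0.0               # H(0) = 0 by construction
assert abs(H_left(2.0) - 1.0) < 1e-6    # 1 for x > 0
assert abs(H_left(-2.0)) < 1e-6         # 0 for x < 0
```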

Step with a Hole

As it turns out, this method doesn't work as desired on jump discontinuities where the function is undefined at the point of discontinuity. However, depending on the patch, one can formulate a patched version which assigns a value at that point.

Let us consider the step function:

\[ H(x)=\begin{cases} 1 & x>0\\ 0 & x<0 \end{cases}\]

And so we formulate the patching function:

\[ H_\delta(x)=\begin{cases} 1 & x=\delta \\ 0 & x=-\delta \end{cases}=\frac{x+\delta}{2\delta}\]

Notice that for \(x=0\) we have \(H_\delta(0)=\frac{1}{2}\); but otherwise this formulation is as expected. Rewriting \(H(x)\) in terms of \(H_\delta(x)\) we have:

\[ H(x)\approx H_\delta(\max(\min(x,\delta),-\delta))\]

Equivalently expressed in terms of \(\abs{\varepsilon}\) and simplified:

\[ H(x)=\lim_{\varepsilon\to0}{\frac{1}{4\abs{\varepsilon}}(2\abs{\varepsilon}+\abs{x+\abs{\varepsilon}}-\abs{x-\abs{\varepsilon}})}\]

Looking back at our previous derivations this makes sense: it is precisely the average of the right- and left-sided formulas, and is more or less the natural choice. It just so happens that the resulting choice \(H(0)=\frac{1}{2}\) is the accepted convention, which works out conveniently for us.
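We can also check the patched form \(H_\delta(\max(\min(x,\delta),-\delta))\) numerically with a small fixed \(\delta\) (a Python sketch, with my own naming):

```python
def H_sym(x, delta=1e-9):
    # H_delta(clamp(x, -delta, delta)) with H_delta(x) = (x + delta) / (2*delta)
    c = max(min(x, delta), -delta)
    return (c + delta) / (2 * delta)

assert H_sym(0.0) == 0.5    # the patch assigns the midpoint value at the jump
assert H_sym(1.0) == 1.0
assert H_sym(-1.0) == 0.0
```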

Patchy Notation

Taking the variations of the step function as inspiration, we have general forms for jump discontinuities (right-sided, left-sided, or neither). Let us denote the following, for conciseness:

\[ F(\ell_{a}^{b}(x))=\begin{cases} F(b) & x\geq b \\ F(x) & a\leq x\leq b\\ F(a) & x\leq a \end{cases}\]

Where \(\ell_{a}^{b}(x)\) is the clamping function between \(a\) and \(b\), and so naturally:

  1. \(\ell_{a}^{\infty}(x)=\max(x,a)\)
  2. \(\ell_{-\infty}^{b}(x)=\min(x,b)\)
  3. \(\ell_{a}^{b}(x)=\max(\min(x,b),a)\)

Hence, given a function \(f(x)\) with a jump discontinuity at \(x=a\), and \(p,q\in\{0,1\}\) denoting the "side" (\(p=1,q=0\) gives \(\geq\) and \(<\); \(p=0,q=1\) gives \(>\) and \(\leq\); and \(p=q=1\) gives both \(>\) and \(<\)):

\[ f(x)\approx\begin{cases} f(x) & x\geq a+q\cdot\delta \\ f_{\delta}(x) & a-p\cdot\delta\leq x\leq a+q\cdot\delta\\ f(x) & x\leq a-p\cdot\delta \end{cases}\]

Where \(\delta>0\) and \(f_\delta(x)\) satisfies \(f_{\delta}(a-p\cdot\delta)=f(a-p\cdot\delta)\) and \(f_{\delta}(a+q\cdot\delta)=f(a+q\cdot\delta)\). Hence, by our gluing formula:

\[ f(x)\approx f(\ell_{a+q\cdot\delta}^{\infty}(x))+f_\delta(\ell_{a-p\cdot\delta}^{a+q\cdot\delta}(x))+f(\ell_{-\infty}^{a-p\cdot\delta}(x))-f(a+q\cdot\delta)-f(a-p\cdot\delta)\]

Let \(\delta=\abs{\varepsilon}\to0\):

\[ f(x)=\lim_{\varepsilon\to0}{f(\ell_{a+q\abs{\varepsilon}}^{\infty}(x))+f_{\abs{\varepsilon}}(\ell_{a-p\abs{\varepsilon}}^{a+q\abs{\varepsilon}}(x))+f(\ell_{-\infty}^{a-p\abs{\varepsilon}}(x))-f(a+q\abs{\varepsilon})-f(a-p\abs{\varepsilon})}\]
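The general patch can be sketched in code (Python; `clamp` and `patched` are hypothetical names of my own), taking the sign function with a both-sided patch (\(p=q=1\)) as an example:

```python
import math

def clamp(x, a, b):
    # the clamping function l_a^b (a = lower bound, b = upper bound)
    return max(min(x, b), a)

def patched(f, f_delta, a, p, q, delta):
    # the gluing formula: patch f's jump at x = a over [a - p*delta, a + q*delta]
    lo, hi = a - p * delta, a + q * delta
    def g(x):
        return (f(clamp(x, hi, math.inf))
                + f_delta(clamp(x, lo, hi))
                + f(clamp(x, -math.inf, lo))
                - f(hi) - f(lo))
    return g

# example: the sign function, jump at a = 0, patched on both sides (p = q = 1)
delta = 1e-9
sgn = lambda x: 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)
sgn_delta = lambda x: x / delta  # interpolates (-delta, -1) and (delta, 1)
g = patched(sgn, sgn_delta, 0.0, 1, 1, delta)
assert g(5.0) == 1.0 and g(-5.0) == -1.0 and g(0.0) == 0.0
```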

General Step

Consider, then, a general step function:

\[ H(x)=\lim_{\varepsilon\to0}{H_{\abs{\varepsilon}}(\ell_{-p\abs{\varepsilon}}^{q\abs{\varepsilon}}(x))}\]

This is the direct result of the first and third terms cancelling with the last two terms in the general patching equation. Then, if \(H_{\abs{\varepsilon}}(x)\) is the linear function connecting \((-p\abs{\varepsilon},0)\) and \((q\abs{\varepsilon},1)\), we have:

\[ H(x)=\lim_{\varepsilon\to0}{\ell_{0}^{1}\left(\frac{x+p\abs{\varepsilon}}{(p+q)\abs{\varepsilon}}\right)}\]
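A quick numerical check of the general step, again with a small fixed \(\varepsilon\) in place of the limit (Python, my own naming), recovering all three variants at the jump:

```python
def H_general(x, p, q, eps=1e-9):
    # clamp((x + p*eps) / ((p + q)*eps), 0, 1); requires p + q >= 1
    e = abs(eps)
    return max(min((x + p * e) / ((p + q) * e), 1.0), 0.0)

assert H_general(0.0, p=1, q=0) == 1.0   # right-sided: H(0) = 1
assert H_general(0.0, p=0, q=1) == 0.0   # left-sided:  H(0) = 0
assert H_general(0.0, p=1, q=1) == 0.5   # both-sided:  H(0) = 1/2
```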


General Algorithm

The algorithm for this technique is fairly straightforward, but requires a certain care when used in practice.

  1. Patch discontinuities by interpolating around the discontinuity limits. That is, if \(f(a)\) is defined (if not, use standard piecewise techniques to rewrite the function so that it is), and \(\lim_{x\to a^+}{f(x)}\) differs from \(\lim_{x\to a^-}{f(x)}\), then interpolate between \(a+\delta\) or \(a-\delta\), and \(a\).
  2. 'Glue' the patch(es) where necessary.

For multivariate functions, discontinuities occur along circles, spheres, and other such sets, and therefore one might consider using the vector magnitude method (\((x,y)=(a,b)\iff(x-a)^2+(y-b)^2=0\)) to interpolate and patch. This is considerably more difficult, but conceptually nearly identical.

Floor Function

The piecewise-limit form of the floor function is a bit of a doozy, so we'll walk through the approach.

Let us denote the floor function as follows using general piecewise notation:

\[ \left\lfloor x\right\rfloor=\left\{n,\quad x\in[n,n+1)\mid n\in\mathbb{Z}\right\}\]

Using the fact that we can approximate \(x\in[n,n+1)\) with \(x\in[n,n+1-\abs{\varepsilon}]\) for \(\varepsilon\to0\), we note that our infinitely many patches take the form of:

\[ f_n(x)=\begin{cases} n & x=n+1-\abs{\varepsilon}\\ n+1 & x=n+1 \\ \star & \star \end{cases}=n+\frac{x-n-1+\abs{\varepsilon}}{\abs{\varepsilon}}\]

In particular, we now desire continuous pieces on the interval \(x\in[n,n+1]\), so that:

\[ F_n(x)=\begin{cases} n & x\leq n+1-\abs{\varepsilon} \\ f_n(x) & x\geq n+1-\abs{\varepsilon} \end{cases}=n+\max(\frac{x-n-1+\abs{\varepsilon}}{\abs{\varepsilon}},0)\]

Hence, substituting these pieces back into our piecewise notation:


\[ \left\lfloor x\right\rfloor=\lim_{\varepsilon\to0}{\left\{n+\max(\frac{x-n-1+\abs{\varepsilon}}{\abs{\varepsilon}},0),\quad x\in[n,n+1]\mid n\in\mathbb{Z}\right\}}\]

The most tedious component now is applying the gluing formula (as its representation is now continuous); after some significant work, we have the following formula (note that \(\varepsilon\) has been redefined in terms of \(T\), via \(\abs{\varepsilon}=\frac{1}{T}\)):

\[ \left\lfloor x\right\rfloor=\lim_{T\to\infty}{\left(\sum_{n=-T}^{T}{\max(T\ell_{-1}^{0}(x-n-1)+1,0)}-T\right)}\]

Or, alternatively (unnecessarily):

\[ \left\lfloor x\right\rfloor=\lim_{T\to\infty}{\left(\sum_{n=-T}^{T}{\ell_{0}^{\infty}(T\ell_{-1}^{0}(x-n-1)+1)}-T\right)}\]
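Both forms can be checked with a large but finite \(T\) in place of the limit; a Python sketch of the first (`floor_sum` is my own name, and \(T\) must be large relative to \(\abs{x}\) and to the reciprocal distance from \(x\) to the nearest integer above):

```python
def floor_sum(x, T=1000):
    # sum_{n=-T}^{T} max(T * clamp(x - n - 1, -1, 0) + 1, 0) - T
    total = 0.0
    for n in range(-T, T + 1):
        clamped = max(min(x - n - 1, 0.0), -1.0)
        total += max(T * clamped + 1.0, 0.0)
    return total - T

assert floor_sum(3.7) == 3.0
assert floor_sum(-1.2) == -2.0
assert floor_sum(5.0) == 5.0   # exact at integers
```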

"It works!"

To briefly show that it works for real numbers, let \(x=p+r\) for \(p\in\mathbb{Z}\) and \(r\in[0,1)\). We then show that \(\left\lfloor p+r\right\rfloor=p\).

Suppose that \(\ell_{-1}^{0}(p+r-n-1)=-1\); then by definition, \(p+r-n-1\leq-1\implies n\geq p+r\). Since \(n\) is an integer (and we treat \(n=p\) separately below), we consider \(n\geq p+1\).

Likewise consider when \(\ell_{-1}^{0}(p+r-n-1)=0\); then we have that \(p+r-n-1\geq 0\implies n\leq p+r-1<p\). Hence we consider \(n\leq p-1\).

For the \(n=p\) case, which the two cases above omit, notice that, since \(r\in[0,1)\) and \(T\to\infty\):

\[ \begin{align} \ell_{0}^{\infty}(T\ell_{-1}^{0}(p+r-p-1)+1) &= \ell_{0}^{\infty}(T\ell_{-1}^{0}(r-1)+1) \\ &=\ell_{0}^{\infty}(T(r-1)+1) \\ &=0 \end{align}\]

Hence, since the \(n=p\) term vanishes, we can split our summation up into two parts:

\[ \left\lfloor p+r\right\rfloor=\lim_{T\to\infty}{\left(-T+\sum_{n=-T}^{p-1}{\ell_{0}^{\infty}(1)}+\sum_{n=p+1}^{T}{\ell_{0}^{\infty}(1-T)}\right)}\]

Which simplifies to:

\[ \left\lfloor p+r\right\rfloor=\lim_{T\to\infty}{\left(-T+(p+T)+\sum_{n=p+1}^{T}{\underbrace{\ell_{0}^{\infty}(1-T)}_{0}}\right)}=p\]

Final Remarks

At some point, I managed to derive a formula for \([x\in\mathbb{Z}]=\begin{cases} 1 & x\in\mathbb{Z} \\ 0 & x\not\in\mathbb{Z} \end{cases}\). This ended up being the following:

\[ \lim_{T\to\infty}{\sum_{n=-T}^{T}{\left(1-\ell_{-1}^{1}(T\ell_{-\frac{1}{2}}^{\frac{1}{2}}(x-n))^2\right)}}\]

Which, as you can see, is a little bit convoluted, but fairly readable in terms of how it works. After reading it, my girlfriend faithfully let me know I could represent the same thing using:

\[ \lim_{n\to\infty}{\cos(\pi x)^{2n}}\]

The message being that, despite my having derived the former formula using piecewise methods, the latter formula is far more elegant in its formulation, and doesn't obviously invoke piecewise at all. In fact, it only makes use of smooth functions, with no obvious composition that yields a piecewise expression inside the limit, in the same spirit as the Dirichlet function (and if you haven't seen the limit/cosine form of that, you should).
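The two indicator formulas can be compared numerically, replacing the limits with large fixed parameters (a Python sketch; the truncation of the sum to \(n\) near \(x\) is my own shortcut, justified since distant terms vanish):

```python
from math import cos, pi

def is_int_clamp(x, T=10**6):
    # sum over n of 1 - clamp(T * clamp(x - n, -1/2, 1/2), -1, 1)^2,
    # truncated to n near x since distant terms contribute 0
    total = 0.0
    for n in range(int(x) - 2, int(x) + 3):
        inner = max(min(x - n, 0.5), -0.5)
        total += 1.0 - max(min(T * inner, 1.0), -1.0) ** 2
    return total

def is_int_cos(x, n=10**6):
    # cos(pi x)^(2n) with a large fixed n in place of the limit
    return cos(pi * x) ** (2 * n)

assert is_int_clamp(4.0) == 1.0 and is_int_clamp(4.5) == 0.0
assert abs(is_int_cos(4.0) - 1.0) < 1e-6 and is_int_cos(4.5) < 1e-6
```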

As it stands, the entire method documented in this post essentially forces a piecewise form onto whatever we're trying to do, irrespective of whether that function in fact requires it, or whether it can be written entirely as a composition of elementary functions (which the ones demonstrated here can, by definition). It is nonetheless interesting to see such formulations result from using the piecewise notation itself as a framework, built atop our previous derivations.