The M and P functions that I mentioned in “Higher Order Operators and Functions” are a lot more interesting than I expected.
First, a recap of the definitions:

M(f(x)) = f(x) + ∫f(x) dx + ∫∫f(x) dx dx + … (taking the general antiderivative at each step)
P(f(x)) = f(x) + f'(x) + f''(x) + f'''(x) + …

Originally I guessed that there is no function for which both M(f(x)) and P(f(x)) converge, but I have found out how wrong I was. For the simple function f(x) = x:

M(x) = x + x^2/2! + x^3/3! + … = e^x - 1,

which is the Taylor series expansion for e^x except the constant term. And

P(x) = x + 1 + 0 + 0 + … = x + 1.

So x is doubly convergent. For a general monomial a x^b, we multiply the monomial by a term k = 1/(a · b!) so that it has the form

x^b / b!,

which matches the Taylor series form. Undoing the scaling,

M(a x^b) = a · b! · (x^b/b! + x^(b+1)/(b+1)! + …) = a · b! · (e^x - (1 + x + … + x^(b-1)/(b-1)!)).
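A quick numeric sanity check on the monomial case: the k-th repeated antiderivative of a x^b (with integration constants taken as 0) is a · b! · x^(b+k)/(b+k)!, so the partial sums should approach a · b! times e^x minus the first b terms of the Taylor series for e^x. This is a sketch, and the helper names are my own:

```python
import math

def M_monomial_partial(a, b, x, n_terms):
    # The k-th repeated antiderivative of a*x^b (integration constants = 0)
    # is a * b! * x^(b+k) / (b+k)!; sum the first n_terms of them.
    return sum(a * math.factorial(b) * x ** (b + k) / math.factorial(b + k)
               for k in range(n_terms))

def M_monomial_closed(a, b, x):
    # Conjectured closed form: a * b! * (e^x - (1 + x + ... + x^(b-1)/(b-1)!))
    head = sum(x ** n / math.factorial(n) for n in range(b))
    return a * math.factorial(b) * (math.exp(x) - head)

a, b, x = 3.0, 2, 1.5
print(abs(M_monomial_partial(a, b, x, 40) - M_monomial_closed(a, b, x)) < 1e-9)
```

With 40 terms the partial sum and the closed form agree to well under 1e-9 at the sample point.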
This sum is convergent for every x. And we showed before that all polynomials are P convergent. Integration is distributive over addition, so we can add the expression above for each term of a polynomial to get a combined M(p(x)) for any polynomial p(x). All polynomials are doubly convergent, then. Because integration and differentiation are distributive over addition, the M and P functions are also distributive (modulo some technical difficulties that may occur with convergence). An example polynomial: M(x^2 + x) = M(x^2) + M(x) = 2(e^x - 1 - x) + (e^x - 1) = 3e^x - 2x - 3.
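The term-by-term claim can be checked numerically too: summing the per-monomial closed forms should match the partial sums of M applied to the whole polynomial. A minimal sketch, with a polynomial represented as a coefficient list (my own representation; helper names are also mine):

```python
import math

def M_monomial_closed(a, b, x):
    # M(a*x^b) = a * b! * (e^x - (1 + x + ... + x^(b-1)/(b-1)!))
    head = sum(x ** n / math.factorial(n) for n in range(b))
    return a * math.factorial(b) * (math.exp(x) - head)

def M_poly_partial(coeffs, x, n_terms):
    # Partial sum of M(p) for p(x) = sum of coeffs[b] * x^b, integrating
    # term by term (constants of integration taken as 0): the k-th
    # antiderivative of a*x^b is a * b! * x^(b+k) / (b+k)!.
    total = 0.0
    for b, a in enumerate(coeffs):
        total += sum(a * math.factorial(b) * x ** (b + k) / math.factorial(b + k)
                     for k in range(n_terms))
    return total

coeffs, x = [0.0, 1.0, 1.0], 1.5  # p(x) = x + x^2
closed = sum(M_monomial_closed(a, b, x) for b, a in enumerate(coeffs))
print(abs(M_poly_partial(coeffs, x, 40) - closed) < 1e-9)
```

The agreement illustrates the distributivity over addition: M of the polynomial equals the sum of M over its monomials.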
Rationals: f(x) = 1/x turns out not to be P convergent: the n-th derivative of 1/x is (-1)^n · n!/x^(n+1), and since n! eventually dominates x^(n+1) for any fixed x, the terms of P(1/x) grow without bound and the series diverges. The value and convergence status of M(1/x) remain unknown.
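The behavior of the derivative series for 1/x can be seen numerically: the partial sums of P(1/x) blow up in magnitude at any fixed point. A short sketch (the helper name is mine):

```python
import math

def P_partial_inv(x, n_terms):
    # Partial sum of P(1/x): the n-th derivative of 1/x is (-1)^n * n! / x^(n+1).
    return sum((-1) ** n * math.factorial(n) / x ** (n + 1) for n in range(n_terms))

x = 5.0
sizes = [abs(P_partial_inv(x, n)) for n in (10, 20, 30)]
print(sizes[0] < sizes[1] < sizes[2])  # magnitudes keep growing: no convergence
```

The first few terms shrink (while x^(n+1) still outpaces n!), which makes the series look convergent at a glance, but once n exceeds x the factorial takes over.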