(…as we were), does anyone have an intuition — or can anyone *point me to* an intuition — for why Fourier series would be so much more powerful than power series? Intuitively, I would think that very-high-order polynomials would buy you the power to represent very spiky functions, functions with discontinuities at a point (e.g., f(x) = -1 for x < 0, f(x) = 1 for x >= 0), etc. Yet the functions that can be represented by power series are very smooth (“analytic”), whereas the functions representable by Fourier series can be very spiky indeed.
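For that concrete step function, it's easy to watch a Fourier series do what no power series can. A small numerical sketch (my own illustration, not from any text mentioned here) sums the standard square-wave series (4/π) Σ sin((2k+1)x)/(2k+1) and checks it against the jump:

```python
import math

def square_wave_partial_sum(x, n_terms):
    """Partial sum of the Fourier series of the square wave
    f(x) = -1 for x < 0, +1 for x >= 0 on (-pi, pi):
    f(x) ~ (4/pi) * sum_{k=0}^{n-1} sin((2k+1)x) / (2k+1)."""
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

# Away from the jump at x = 0, the partial sums converge to the step values.
print(square_wave_partial_sum(1.0, 10000))   # near +1
print(square_wave_partial_sum(-1.0, 10000))  # near -1
```

(Near the jump itself the partial sums overshoot by a fixed fraction no matter how many terms you take — the Gibbs phenomenon mentioned in the comments below.)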

The intuition may be in Körner, but I’ve not found it.

This could lead me down a generalization path, namely: develop a hierarchy of series representations, with representations higher on the hierarchy being those that can represent all the functions that those lower on the hierarchy can represent, plus others. In this way you’d get a total ordering of the set of series representations. I don’t know if this is even possible; maybe there are some series representations that intersect with, but are not sub- or supersets of, other series representations. I don’t think I’ve ever read a book that treated series representations generally; it’s always been either Fourier or power series, but rarely both, and never any others. Surely these books exist; I just don’t know them.

And now, back to reading Hawkins.


My first thought was the Runge phenomenon, but then I read on Wikipedia that there is a similar problem in Fourier analysis (the Gibbs phenomenon).

I googled your question and found this short discussion of Taylor series (begun by a question from a guy who really doesn’t know what’s up):

http://math.stackexchange.com/questions/47430/is-fourier-series-an-inverse-of-taylor-series

The key takeaways for me (as potential answers to your question) were

(1) Fourier series basis functions are orthogonal, unlike those of Taylor series.

(2) High-order polynomials fit local phenomena really well but are inherently not periodic and so do not capture “global” (=periodic, in my reading?) behavior.
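Point (1) is easy to check numerically. A minimal sketch (function names are my own) confirms that distinct sine modes integrate to zero against each other over a full period, while the monomials of a power series do not:

```python
import math

def sine_inner(m, n, samples=20000):
    """Midpoint-rule approximation of the inner product
    integral of sin(m x) * sin(n x) over [-pi, pi]."""
    h = 2 * math.pi / samples
    return h * sum(
        math.sin(m * x) * math.sin(n * x)
        for x in (-math.pi + (i + 0.5) * h for i in range(samples))
    )

def monomial_inner(m, n, samples=20000):
    """Midpoint-rule approximation of the integral of x^m * x^n over [-1, 1]."""
    h = 2.0 / samples
    return h * sum(
        x ** (m + n) for x in (-1.0 + (i + 0.5) * h for i in range(samples))
    )

print(sine_inner(2, 3))      # ~0: distinct Fourier modes are orthogonal
print(sine_inner(2, 2))      # ~pi: a mode against itself
print(monomial_inner(2, 4))  # ~2/7: monomials are NOT orthogonal
```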


Super-interesting, Mary Beth; thank you! The idea that constructing the series from an integral makes the series representation in some sense global, whereas constructing it from a derivative makes it inherently more local … well, that’s just fascinating. I am fascinated.

I wonder how far you can take that. Can one construct a series for the Dirichlet function (1 at rational numbers, 0 everywhere else) using an appropriate integral transform? The Wiki kind of sort of says “yes”, but I may just be misreading it.

Also, mustn’t there be some way to construct something close to a power series, but in which the summands are mutually orthogonal polynomials? … I found the Wikipedia entry about the classical orthogonal polynomials, but it’s not clear that they’re used to construct representations of other functions.

This is all just fascinating. Gotta keep reading.

You’re the best. Thanks for writing.


I cannot comment on the broader mathematical questions, but concerning your last query (“Are orthogonal polynomials used to construct representations of other functions?”)—yes, absolutely, all the time. A very broad class of applications is solving linear partial differential equations that possess a certain symmetry. For example, if your PDE has spherical symmetry, its eigenfunctions will have an angular dependence given by the spherical harmonics, which are related to the associated Legendre polynomials. An expansion in a Fourier series decomposes a function into the fundamental modes of a string (or, in higher dimension, those of a rectangular box); an expansion in spherical harmonics decomposes it into the fundamental modes of a sphere.

Here’s a neat practical application: say you’re interested in the electric potential due to an arbitrarily shaped charged object at some distance r. Which features of the shape are important? How do you construct a sequence of increasingly accurate approximations when r is large relative to the object’s dimensions? You decompose the charge density into (the orthogonal family of) Legendre polynomials, {P_n}. You’ll find that the contribution to the potential due to the P_0 term in the charge density falls off like 1/r, the contribution due to P_1 like 1/r^2, etc.

In many numerical applications, storing a function as its coefficients in some orthogonal series expansion will give you vastly better precision per byte of storage space than storing it as, say, values on a rectangular grid. Which family of orthogonal functions to choose depends on what approximate symmetry your function has: if it’s rectangular, you use a Fourier series; if it’s spherical, spherical harmonics; if it’s cylindrical, Bessel functions; and so on. This trick is widely used in computational materials science for storing atomic potentials.
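As a toy version of that decomposition, here is a sketch (my own, using only the first three Legendre polynomials) that projects f(x) = x² onto P_0, P_1, P_2 via the standard formula c_n = ((2n+1)/2) ∫ f(x) P_n(x) dx on [-1, 1]:

```python
# Assumed setup: hard-coded P_0, P_1, P_2 and a simple midpoint rule;
# a real application would use a library's orthogonal-polynomial routines.
P = [lambda x: 1.0, lambda x: x, lambda x: (3 * x * x - 1) / 2]

def legendre_coeff(f, n, samples=100000):
    """c_n = (2n+1)/2 * integral of f(x) * P_n(x) over [-1, 1]."""
    h = 2.0 / samples
    total = sum(
        f(x) * P[n](x) for x in (-1.0 + (i + 0.5) * h for i in range(samples))
    )
    return (2 * n + 1) / 2 * total * h

f = lambda x: x * x
coeffs = [legendre_coeff(f, n) for n in range(3)]
print(coeffs)  # x^2 = (1/3) P_0 + 0 * P_1 + (2/3) P_2
```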


As you may have already figured out, the answer to “Why are Fourier series more powerful than power series?” is “They aren’t.” The Weierstrass approximation theorem states that we can build a sequence of polynomials to uniformly approximate any continuous function on a closed interval; dropping the uniformity of the approximation gives us the usual L^2 result. You’re confusing Taylor series with power series in general.
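That claim can be made concrete with Bernstein polynomials, the classical constructive proof of the Weierstrass theorem. This sketch (my own illustration) uniformly approximates |x − 1/2| on [0, 1] — a continuous function that is not analytic at the kink, so no single Taylor series captures it:

```python
import math

def bernstein_approx(f, n, x):
    """n-th Bernstein polynomial of f on [0, 1]:
    B_n(f)(x) = sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k)."""
    return sum(
        f(k / n) * math.comb(n, k) * x**k * (1 - x) ** (n - k)
        for k in range(n + 1)
    )

f = lambda x: abs(x - 0.5)  # continuous, but not analytic at x = 0.5
grid = [i / 400 for i in range(401)]

def max_err(n):
    """Worst-case error of B_n(f) on a fine grid over [0, 1]."""
    return max(abs(bernstein_approx(f, n, x) - f(x)) for x in grid)

print(max_err(50), max_err(200))  # the uniform error shrinks as n grows
```

(Convergence near the kink is slow — roughly like 1/√n — but it is uniform, which is the content of the theorem.)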

Abstract Fourier Series should be covered in any modern Real Analysis book. The chapter on Functional Analysis in “Mathematics: Its Content, Methods and Meaning” is good, at least to my memory.


‘The Weierstrass Approximation theorem states that we can build a sequence of polynomials to uniformly approximate any continuous function’

But don’t Fourier series converge for a broader class of functions than just piecewise-continuous ones?
