@Chen Siyi

November 30, 2020

Note9 Fourier Series

Contents:

  1. Inner Product Space (Vector Space; Inner Product Space; Norm and Orthogonality; Gram-Schmidt Orthonormalization; Orthonormal Basis and Completeness; Approximation of "Vectors")
  2. Fourier Analysis (Basic Definitions about "Measure"; The Fourier-Euler Basis; The Gibbs Phenomenon; Convergence of Fourier Series (Dirichlet's rule); The Fourier-Cosine Basis; The Fourier-Sine Basis; Complex Fourier-Euler Basis)
  3. Additional Practice

Let's look at the series methods from a wider point of view.

Inner Product Space

Vector Space

A vector space is a set $V$, equipped with a rule for addition of any two vectors and a rule for scalar multiplication of a vector by a scalar.

The addition + must satisfy the following axioms:

  1. The set is closed under addition: For any two vectors $v$ and $w$ of $V$, the sum $v + w$ is also in $V$.
  2. Addition is commutative: For all $v, w \in V$, $v + w = w + v$.
  3. Addition is associative: For all $u, v, w \in V$, $(u + v) + w = u + (v + w)$.
  4. There is an additive identity: There exists $0 \in V$ such that $v + 0 = v$ for all $v \in V$.
  5. Every element has an additive inverse: for every $v \in V$, there exists a vector $-v \in V$ such that $v + (-v) = 0$.

The scalar multiplication must satisfy the following axioms:

  1. The set is closed under scalar multiplication: For any vector $v$ in $V$ and any scalar $\lambda$, the scalar multiple $\lambda v$ is also in $V$.
  2. For two scalars $\lambda, \mu$, we have $\lambda(\mu v) = (\lambda\mu)v$ for all vectors $v \in V$.
  3. For the scalar $1$, we have $1 \cdot v = v$ for all $v \in V$.

And finally, scalar multiplication distributes over addition:

  1. $\lambda(v + w) = \lambda v + \lambda w$ for all $v, w \in V$ and all scalars $\lambda$.
  2. $(\lambda + \mu)v = \lambda v + \mu v$ for all vectors $v \in V$ and all scalars $\lambda, \mu$.

Inner Product Space

An inner product on a vector space $V$ is a function

$$\langle \cdot, \cdot \rangle : V \times V \to \mathbb{F},$$

where $\mathbb{F}$ can be $\mathbb{R}$ or $\mathbb{C}$. It must satisfy the following axioms:

  1. Symmetry: $\langle f, g \rangle = \overline{\langle g, f \rangle}$ for all vectors $f, g \in V$ (over $\mathbb{R}$ this is simply $\langle f, g \rangle = \langle g, f \rangle$);
  2. Linearity in each factor: $\langle \lambda f + \mu g, h \rangle = \lambda \langle f, h \rangle + \mu \langle g, h \rangle$ for all vectors $f, g, h \in V$ and all scalars $\lambda, \mu$ (the corresponding property in the second factor then follows from symmetry, up to conjugation in the complex case);
  3. Positive definiteness: $\langle f, f \rangle \ge 0$ for all $f \in V$, with $\langle f, f \rangle = 0$ if and only if $f = 0$.

A vector space V together with a choice of an inner product is called an inner product space.

As the most familiar example, $\mathbb{R}^n$ together with the dot product

$$\langle x, y \rangle = \sum_{k=1}^{n} x_k y_k$$

is an inner product space.

 

Norm and Orthogonality

As soon as you define an inner product, you can define a norm and orthogonality.

The norm of a vector $f$ is defined as

$$\|f\| = \sqrt{\langle f, f \rangle},$$

which can be interpreted as the length of $f$ under that inner product.

If the norm of $f$ equals $1$, then $f$ is said to be normed or normalized.

And there are some fundamental theorems:

  1. Pythagoras’s Theorem:

    Let $(V, \langle\cdot,\cdot\rangle)$ be an inner product space and $f, g \in V$, where $\langle f, g \rangle = 0$. Then $\|f + g\|^2 = \|f\|^2 + \|g\|^2$.

  2. Bessel’s Inequality:

    Let $(V, \langle\cdot,\cdot\rangle)$ be an inner product space and $(e_k)_{k \in \mathbb{N}}$ be an orthonormal system in $V$. Then, for any $f \in V$,
    $$\sum_{k=1}^{\infty} \big|\langle f, e_k \rangle\big|^2 \le \|f\|^2.$$

 

Two vectors $f$ and $g$ are orthogonal if $\langle f, g \rangle = 0$.

A system of vectors $(e_k)$ is an orthonormal system if all the vectors are normed and are pairwise orthogonal, i.e. $\langle e_j, e_k \rangle = \delta_{jk}$.

 

 

Let's see a really straightforward way to find orthonormal systems!

As long as you initially have some system, just keep using orthogonal projection...

Gram-Schmidt Orthonormalization

Gram-Schmidt orthonormalization is used to construct an orthonormal system $(e_1, e_2, \dots)$ from an existing system $(f_1, f_2, \dots)$: set $e_1 = f_1 / \|f_1\|$, and for $k \ge 2$ first remove the projections onto the vectors already constructed, $\tilde e_k = f_k - \sum_{j=1}^{k-1} \langle f_k, e_j \rangle e_j$, then normalize, $e_k = \tilde e_k / \|\tilde e_k\|$.
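A minimal numerical sketch of the procedure (assuming real vectors in $\mathbb{R}^n$ with the Euclidean dot product; the function name and the tolerance are just illustrative choices):

```python
import numpy as np

def gram_schmidt(vectors, inner=np.dot):
    """Orthonormalize `vectors` (1-D NumPy arrays) by repeated orthogonal
    projection, using the given inner product (Euclidean dot product by default)."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        # subtract the projection of v onto every vector found so far
        for e in basis:
            w = w - inner(w, e) * e
        norm = np.sqrt(inner(w, w))
        if norm > 1e-12:                 # skip (numerically) dependent vectors
            basis.append(w / norm)
    return basis

# example: the third vector is a combination of the first two,
# so the returned system has only two vectors
vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([2.0, 1.0, 1.0])]
for e in gram_schmidt(vs):
    print(e)
```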

A Tricky Question:

By the way, does every system return an orthonormal system of the same size as the original one?

 

 

Orthonormal Basis and Completeness

A system of vectors $(e_k)$ is an orthonormal basis of the vector space $V$ if it is an orthonormal system and the closure of its span is all of $V$ (equivalently, every $f \in V$ can be approximated arbitrarily well by finite linear combinations of the $e_k$).

A Tricky Question:

Can we always construct an orthonormal basis of $V$ using the Gram-Schmidt orthonormalization approach from an existing basis of $V$?

A theorem but not the definition: An inner product space is complete iff every maximal orthonormal system is an orthonormal basis.

Intuitively you can interpret it as follows:

Any vector in $V$ can be represented "precisely" (by linear combinations) using the maximal orthonormal system, as long as it is a basis.

Otherwise, even if you have a maximal orthonormal system, this system can't be used to represent every vector precisely!

Multiple inner products (and norms) can be defined for a vector space. For example, let $V$ be a vector space of functions on an interval $[a, b]$ (say, the piecewise continuous functions).

A sequence $(f_n)$ is said to converge uniformly to $f$ if $\sup_{x \in [a,b]} |f_n(x) - f(x)| \to 0$, to converge in the mean if $\int_a^b |f_n(x) - f(x)|\,dx \to 0$, and to converge in the mean square if $\int_a^b |f_n(x) - f(x)|^2\,dx \to 0$.

But not all inner products make the inner product space complete.

For example, $C([a, b])$ with the inner product below is incomplete, while $L^2([a, b])$ with the same inner product is complete, where

$$\langle f, g \rangle = \int_a^b f(x)\,\overline{g(x)}\,dx.$$
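One standard way to see the incompleteness (an illustrative example added here, not necessarily the one used in class): on $[-1, 1]$ the continuous functions $f_n(x) = \max(-1, \min(1, n x))$ form a Cauchy sequence with respect to the mean-square norm, but their mean-square limit is the discontinuous sign function, so the sequence has no limit inside $C([-1, 1])$.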

 

 

Approximation of "Vectors"

We now "say" the least square estimation is our best approximation.

Let be an inner product space, and an orthonormal system in . We seek to approximate a vector using a linear combination of the first elements (or as many as you can...) of the orthonormal system:

Our approximation is the best when the following "error" is minimized:
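Expanding the squared error with the inner-product axioms gives, for an orthonormal system $e_1, \dots, e_n$, the standard identity

$$\Big\| f - \sum_{k=1}^{n} c_k e_k \Big\|^2 = \|f\|^2 - \sum_{k=1}^{n} \big|\langle f, e_k \rangle\big|^2 + \sum_{k=1}^{n} \big|c_k - \langle f, e_k \rangle\big|^2,$$

so the error is minimized exactly when $c_k = \langle f, e_k \rangle$, i.e. by orthogonal projection onto each basis vector. (Dropping the last sum also recovers Bessel's inequality.)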

 

 

Fourier analysis works in exactly the same way to find an approximation...

We choose the inner product space to be $L^2(I)$ for an interval $I$, with $\langle f, g \rangle = \int_I f(x)\,\overline{g(x)}\,dx$. So, as you will notice, it is generally helpful for representing periodic functions.

Which interval (region) do we choose?

Which orthonormal system do we choose? There are four different choices; keep in mind that the choice is affected by the interval you choose.

 

 

 

 

A Tricky Question:

Recall the Method of Frobenius: do you find any similarities between the methods?

Fourier Analysis

Basic Definitions about "Measure"

A set $N \subseteq \mathbb{R}$ has measure zero if, for every $\varepsilon > 0$, it can be covered by countably many intervals whose total length is less than $\varepsilon$; a property is said to hold almost everywhere if it holds everywhere except on a set of measure zero.

(One of) our next goals:

Find some (Fourier) series that equals a given function almost everywhere on a certain interval.

The Fourier-Euler Basis

If you choose the interval $[-\pi, \pi]$, then, since the trigonometric polynomials below form an orthonormal basis of $L^2([-\pi, \pi])$, we can choose them as the Fourier-Euler basis:

$$\left\{ \frac{1}{\sqrt{2\pi}},\ \frac{\cos(x)}{\sqrt{\pi}},\ \frac{\sin(x)}{\sqrt{\pi}},\ \frac{\cos(2x)}{\sqrt{\pi}},\ \frac{\sin(2x)}{\sqrt{\pi}},\ \dots \right\}.$$

  1. You can use just the first several terms to form a Fourier-Euler expansion of any $f \in L^2([-\pi, \pi])$ by projecting $f$ onto the first several vectors in this basis.
  2. You can also form a Fourier series, which is a linear combination of "every" vector in the basis and equals $f$ almost everywhere. Each coefficient is obtained by projecting $f$ onto the corresponding vector in the basis.

If you choose a different interval, say one of length $2L$, then the Fourier-Euler basis can be obtained by rescaling, e.g. on $[-L, L]$:

$$\left\{ \frac{1}{\sqrt{2L}},\ \frac{1}{\sqrt{L}}\cos\!\Big(\frac{n\pi x}{L}\Big),\ \frac{1}{\sqrt{L}}\sin\!\Big(\frac{n\pi x}{L}\Big) : n \ge 1 \right\}.$$
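As a quick numerical illustration (an added sketch; the sample function $f(x) = x$, the grid size, and the plain Riemann-sum integration are my own choices), the projections onto the Fourier-Euler basis can be computed as follows:

```python
import numpy as np

# sample function on [-pi, pi]; any piecewise-continuous function would do
f = lambda x: x

N = 20000                                   # integration grid
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = 2 * np.pi / N
n_terms = 5

# classical Fourier-Euler coefficients: projections onto 1, cos(kx), sin(kx)
a0 = np.sum(f(x)) * dx / np.pi
a = [np.sum(f(x) * np.cos(k * x)) * dx / np.pi for k in range(1, n_terms + 1)]
b = [np.sum(f(x) * np.sin(k * x)) * dx / np.pi for k in range(1, n_terms + 1)]

# partial sum S_n(x) = a0/2 + sum_k (a_k cos(kx) + b_k sin(kx))
S = a0 / 2 + sum(a[k - 1] * np.cos(k * x) + b[k - 1] * np.sin(k * x)
                 for k in range(1, n_terms + 1))

print("b_k =", [round(v, 4) for v in b])    # for f(x)=x these approach 2*(-1)^(k+1)/k
print("max |S_n - f| on the grid:", round(float(np.max(np.abs(S - f(x)))), 3))
```

The largest error sits near the endpoints $\pm\pi$, where the $2\pi$-periodic extension of $f(x) = x$ has a jump; this is exactly the situation discussed under the Gibbs phenomenon below.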

Exercise:

If possible, uniquely expand , in a Fourier series if (a) the period is , (b) the period is not specified.

 

 

 

 

 

Exercise:

Use the results of the last exercise to prove

 

 

 

 

 

The Gibbs Phenomenon

A Fourier series may not, and does not need to, converge uniformly. Near a jump discontinuity of $f$, the partial sums overshoot by a fixed fraction (roughly 9%) of the jump, no matter how many terms are used. The occurrence of these peaks at the discontinuities of $f$ is known as the Gibbs phenomenon.

Recall "uniform convergence".

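A small numerical experiment (an added illustration using the square-wave sine series; the term counts are arbitrary) makes the overshoot visible:

```python
import numpy as np

# square wave: f = -1 on (-pi, 0), +1 on (0, pi); its Fourier series contains
# only odd sine terms: f(x) ~ (4/pi) * sum over odd k of sin(k x)/k
x = np.linspace(-np.pi, np.pi, 20001)

def partial_sum(n_terms):
    s = np.zeros_like(x)
    for k in range(1, 2 * n_terms, 2):          # k = 1, 3, 5, ...
        s += (4 / np.pi) * np.sin(k * x) / k
    return s

for n in (10, 50, 250):
    overshoot = partial_sum(n).max() - 1.0      # peak height above f = 1
    print(f"{n:4d} odd terms: overshoot = {overshoot:.4f}")

# the overshoot does not shrink to 0 as more terms are added; it tends to about
# 0.18, i.e. roughly 9% of the total jump (which has size 2) -- the Gibbs phenomenon
```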

Convergence of Fourier Series (Dirichlet’s rule)

  1. On any closed sub-interval $[a, b]$, with $f$ continuous on $[a, b]$, the Fourier series converges uniformly towards $f$.

  2. At any point $x_0$, we have the pointwise limit $\dfrac{f(x_0^+) + f(x_0^-)}{2}$ when the one-sided limits of $f$ at $x_0$ exist.

Actually the second point needs certain conditions on $f$ (piecewise smoothness, for example) to hold, but we don't discuss them in VV286.
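As a concrete illustration (a standard example, not necessarily the one from class): take $f(x) = x$ on $(-\pi, \pi)$, extended $2\pi$-periodically. At the jump $x_0 = \pi$ the Fourier series converges to

$$\frac{f(\pi^-) + f(\pi^+)}{2} = \frac{\pi + (-\pi)}{2} = 0,$$

which is indeed the value of every partial sum at $x_0 = \pi$, since all the sine terms vanish there.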

 

The Fourier-Cosine Basis

Cosine functions are even.

If you choose the interval $[0, \pi]$, then the Fourier-cosine basis can be

$$\left\{ \frac{1}{\sqrt{\pi}},\ \sqrt{\frac{2}{\pi}}\cos(x),\ \sqrt{\frac{2}{\pi}}\cos(2x),\ \dots \right\}.$$

The Fourier-Sine Basis

Sine functions are odd.

If you choose the interval $[0, \pi]$, then the Fourier-sine basis can be

$$\left\{ \sqrt{\frac{2}{\pi}}\sin(nx) : n \ge 1 \right\}.$$

A Tricky Question:

For an arbitrary $f$, if you expand it using the Fourier-cosine basis and the Fourier-sine basis separately, and so obtain two series, would the two series become equal almost everywhere?

It's interesting to see how pieces of knowledge link to each other; does this remind you of any fundamental theorems you have seen before?

Exercise:

Explain why any function is the sum of an even function and an odd function in exactly one way.

Explain why any odd function has all of its Fourier cosine coefficients equal to 0.
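Hint (one possible route, added here): check the decomposition

$$f(x) = \underbrace{\tfrac{1}{2}\big(f(x) + f(-x)\big)}_{\text{even}} + \underbrace{\tfrac{1}{2}\big(f(x) - f(-x)\big)}_{\text{odd}},$$

and, for uniqueness, ask what a function that is both even and odd must be. For the second part, note that the product of an odd function with $\cos(nx)$ is odd, so its integral over the symmetric interval vanishes.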

 

 

 

 

 

A Tricky Question:

Can the following question be solved?

Expand , , in a Fourier cosine series.

Exercise:

Expand , , in a half-range (a) sine series, (b) cosine series.

Hint: Can you first "extand" f a little bit?

 

 

 

 

 

Just a reminder before the last part: the space is defined to contain complex-valued functions of a real variable, which is not the same as functions of a complex variable.

Complex Fourier-Euler Basis

If you choose the interval $[-\pi, \pi]$, then the following would also be a basis of $L^2([-\pi, \pi])$, simply because you can change basis with the Fourier-Euler basis which we discussed above:

$$\left\{ \frac{e^{inx}}{\sqrt{2\pi}} : n \in \mathbb{Z} \right\}.$$

So you can also do an orthogonal projection onto each vector of this basis to find an approximation, no matter whether $f$ is real-valued or not; it is just that the coefficients might be complex numbers.
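For reference, the change of basis is just Euler's formula, $e^{inx} = \cos(nx) + i\sin(nx)$: if

$$f \sim \frac{a_0}{2} + \sum_{n \ge 1}\big(a_n \cos(nx) + b_n \sin(nx)\big) = \sum_{n \in \mathbb{Z}} c_n e^{inx},$$

then $c_0 = \frac{a_0}{2}$, $c_n = \frac{a_n - i b_n}{2}$ and $c_{-n} = \frac{a_n + i b_n}{2}$ for $n \ge 1$, with $c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\, e^{-inx}\, dx$.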

 

Additional Practice

Exercise:

Find a Fourier series for , , where ... .

Hint: Can you first "extand" f a little bit?

 

 

 

 

 

Exercise:

Prove that

Hint:

Use the results from the previous exercise.