Framework for Spectral Methods (Chebyshev Nodes)

June 20, 2017 | Author: Michael Daniel Samson | Categories: Spectral Methods, Orthogonal Polynomials, Pseudospectral Methods, Gaussian Quadrature

MAT 357 | Spectral Methods | 9 November 2015

Problem and Standardization

The problem is, under the assumption of an underlying function F(ξ) that can be sampled η_i = F(ξ_i) at any point a ≤ ξ ≤ b, some analysis should be performed (e.g., differentiation, integration). The linear transformation

$$x = \frac{1\cdot(\xi - a) + (-1)\cdot(b - \xi)}{b - a} = \frac{2\xi - a - b}{b - a} \iff \xi = \frac{a(1 - x) + b[x - (-1)]}{1 - (-1)} = \frac{(b - a)x + a + b}{2}$$

maps ξ ↔ x, −1 ≤ x ≤ 1, and gives a function

$$f(x) = F\!\left(\frac{(b - a)x + a + b}{2}\right) = F(\xi).$$
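As a quick sanity check, the standardization map and its inverse can be sketched in a few lines of NumPy (the helper names `to_standard` and `from_standard` are illustrative, not from the notes):

```python
import numpy as np

def to_standard(xi, a, b):
    # x = (2*xi - a - b) / (b - a), maps [a, b] onto [-1, 1]
    return (2 * xi - a - b) / (b - a)

def from_standard(x, a, b):
    # xi = ((b - a)*x + a + b) / 2, maps [-1, 1] back onto [a, b]
    return ((b - a) * x + a + b) / 2

a, b = 2.0, 5.0
xi = np.linspace(a, b, 7)
x = to_standard(xi, a, b)
# Sampling F on [a, b] is the same as sampling f on [-1, 1].
assert np.allclose(from_standard(x, a, b), xi)
assert x[0] == -1.0 and x[-1] == 1.0
```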

Chebyshev Polynomials and Nodes

Spectral methods on f(x) rely on the ability to sample y_i = f(x_i) over the spectral nodes. Let us focus on the Chebyshev nodes. The n Chebyshev nodes of degree n are the roots of the Chebyshev polynomial of degree n, T_n(x), which can be defined as T_n(x) = cos(n arccos x), and also recursively by

$$T_0(x) \equiv \cos 0 = 1, \qquad T_1(x) = \cos(\arccos x) = x, \qquad 2x\,T_n(x) = T_{n+1}(x) + T_{n-1}(x), \tag{1}$$

which follows from letting

$$\theta = \arccos x \implies T_n(x) = \cos n\theta, \tag{2}$$

so that, if m < n,

$$T_{n+m}(x) + T_{n-m}(x) = \cos[(n+m)\theta] + \cos[(n-m)\theta] = \cos n\theta \cos m\theta - \sin n\theta \sin m\theta + \cos n\theta \cos m\theta + \sin n\theta \sin m\theta = 2\cos n\theta \cos m\theta = 2\,T_n(x)\,T_m(x), \tag{3}$$

with m = 1. Thus, the n Chebyshev nodes of degree n satisfy

$$T_n(x) = 0 = \cos n\theta \implies \theta = \arccos x = \frac{2k+1}{2n}\pi, \quad k \text{ an integer},$$

so, for i = 1, ..., n, the nodes are

$$x_i = \cos\!\left(\pi + \frac{2i-1}{2n}\pi\right) = -\cos\!\left(\frac{2i-1}{2n}\pi\right),$$

so that −1 < x_1 < ⋯ < x_n < 1.
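A minimal numerical check of the node formula, assuming NumPy (variable names are ad hoc):

```python
import numpy as np

n = 8
i = np.arange(1, n + 1)
# x_i = -cos((2i - 1) pi / (2n)), listed in increasing order
x = -np.cos((2 * i - 1) * np.pi / (2 * n))

assert np.all(np.diff(x) > 0)                 # -1 < x_1 < ... < x_n < 1
assert np.all((x > -1) & (x < 1))
# The nodes are roots of T_n(x) = cos(n arccos x).
assert np.allclose(np.cos(n * np.arccos(x)), 0.0, atol=1e-12)
```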



Polynomial Interpolation and Spectral Coefficients

If f is sampled over the n Chebyshev nodes of degree n, y_i = f(x_i), then f can be interpolated over [−1, 1] using the Lagrange polynomials over the nodes:

$$y = \sum_{i=1}^{n} y_i L_i(x), \quad \text{where } L_i(x) = \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}, \quad \text{such that } L_i(x_j) = \delta_{ij} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases}$$

The Lagrange basis polynomials L_i(x) form a basis of P_{n−1}, the function space of polynomials of degree at most n − 1, which is an n-dimensional vector space. The Chebyshev polynomials of degree up to n − 1 also form a basis of P_{n−1}, so it is always possible to find coefficients ŷ_i such that

$$y = \sum_{i=1}^{n} y_i L_i(x) = \sum_{i=0}^{n-1} \hat{y}_i T_i(x).$$

ŷ_0, ..., ŷ_{n−1} are the spectral coefficients of the interpolating polynomial.
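The existence of the spectral coefficients can be illustrated with NumPy's Chebyshev utilities (a sketch, assuming the sample function exp(x), which is not from the notes):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 8
i = np.arange(1, n + 1)
x = -np.cos((2 * i - 1) * np.pi / (2 * n))   # Chebyshev nodes
y = np.exp(x)                                # samples y_i = f(x_i)

# n samples and n coefficients (degree n-1): chebfit interpolates exactly,
# so yhat holds the spectral coefficients of the interpolating polynomial.
yhat = C.chebfit(x, y, n - 1)
assert yhat.shape == (n,)
assert np.allclose(C.chebval(x, yhat), y)    # interpolant reproduces the data
```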

Orthogonal Polynomials and Gauss Quadrature

The reason Chebyshev polynomials are preferred is that Chebyshev polynomials (like the Legendre polynomials, or more generally the Jacobi polynomials) are orthogonal with respect to integration against a weight; that is, for some constant k_n,

$$\int_{-1}^{1} \frac{T_n(x)\,T_m(x)}{\sqrt{1 - x^2}}\,dx = \delta_{mn} k_n.$$

This can be shown using (2), with x = cos θ, so that dx = −sin θ dθ and √(1 − x²) = sin θ, and, by (3),

$$\int_{-1}^{1} \frac{T_n(x)\,T_m(x)}{\sqrt{1 - x^2}}\,dx = -\frac{1}{2}\int_{\pi}^{0} \cos[(n+m)\theta]\,d\theta - \frac{1}{2}\int_{\pi}^{0} \cos[(n-m)\theta]\,d\theta$$

$$= \begin{cases} \dfrac{1}{2}\left[\dfrac{\sin[(n+m)\theta]}{n+m} + \dfrac{\sin[(n-m)\theta]}{n-m}\right]_{\theta=0}^{\theta=\pi} = 0 & \text{if } n > m, \\[2ex] \dfrac{1}{2}\left[\dfrac{\sin(2n\theta)}{2n} + \theta\right]_{\theta=0}^{\theta=\pi} = \dfrac{\pi}{2} & \text{if } n = m > 0, \\[2ex] \left[\theta\right]_{\theta=0}^{\theta=\pi} = \pi & \text{if } n = m = 0. \end{cases}$$

So

$$\int_{-1}^{1} \frac{T_n(x)\,T_m(x)}{\sqrt{1 - x^2}}\,dx = \frac{\delta_{mn}(1 + \delta_{n0})\pi}{2}. \tag{5}$$

This means that, if

$$L_i(x) = \sum_{k=0}^{n-1} \ell_{ki}\,T_k(x) \tag{4}$$

such that

$$y = \sum_{i=1}^{n} y_i L_i(x) = \sum_{i=1}^{n} y_i \left[\sum_{k=0}^{n-1} \ell_{ki}\,T_k(x)\right] = \sum_{k=0}^{n-1} \left[\sum_{i=1}^{n} y_i \ell_{ki}\right] T_k(x),$$

then

$$\hat{y}_k = \sum_{i=1}^{n} y_i \ell_{ki}, \tag{6}$$



so it suffices to find the coefficients ℓ_ki. This can be done using the orthogonality property (5) of the Chebyshev polynomials:

$$\int_{-1}^{1} \frac{L_i(x)\,T_k(x)}{\sqrt{1 - x^2}}\,dx = \sum_{j=0}^{n-1} \ell_{ji} \int_{-1}^{1} \frac{T_j(x)\,T_k(x)}{\sqrt{1 - x^2}}\,dx = \frac{\ell_{ki}(1 + \delta_{k0})\pi}{2}.$$

The remaining problem is determining the value of the integral on the left. The choice of the Chebyshev nodes allows the use of Gauss quadrature¹: for any polynomial p of degree at most 2n − 1,

$$\int_{-1}^{1} \frac{p(x)}{\sqrt{1 - x^2}}\,dx = \sum_{j=1}^{n} p(x_j)\,\omega_j, \quad \text{where } \omega_j = \frac{\pi}{n}.$$

Since L_i(x)T_k(x) has degree at most (n − 1) + (n − 1) = 2n − 2 ≤ 2n − 1, the quadrature is exact, so

$$\int_{-1}^{1} \frac{L_i(x)\,T_k(x)}{\sqrt{1 - x^2}}\,dx = \frac{\pi}{n}\sum_{j=1}^{n} L_i(x_j)\,T_k(x_j) = \frac{T_k(x_i)\,\pi}{n} = \frac{\ell_{ki}(1 + \delta_{k0})\pi}{2} \implies \ell_{ki} = \frac{2\,T_k(x_i)}{(1 + \delta_{k0})\,n}.$$
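The closed form for ℓ_ki can be checked numerically: assembling the coefficients from it and evaluating each L_i at the nodes should reproduce L_i(x_j) = δ_ij. A sketch, assuming NumPy (matrix names follow the text):

```python
import numpy as np

n = 6
i = np.arange(1, n + 1)
x = -np.cos((2 * i - 1) * np.pi / (2 * n))   # Chebyshev nodes

k = np.arange(n)
T = np.cos(np.outer(np.arccos(x), k))        # T[i, k] = T_k(x_i)
L = 2.0 * T.T / n                            # L[k, i] = 2 T_k(x_i) / n ...
L[0, :] /= 2.0                               # ... except 1/n when k = 0

# L_i(x_j) = sum_k ell_ki T_k(x_j) = delta_ij, i.e. T @ L is the identity.
assert np.allclose(T @ L, np.eye(n), atol=1e-12)

# Quadrature check: integral of x^2 / sqrt(1 - x^2) over [-1, 1] is pi/2.
assert np.isclose((np.pi / n) * np.sum(x**2), np.pi / 2)
```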

Transformations via Matrix Computations

If the data is in a column vector ~y = (y_1, y_2, ..., y_n)^T, the spectral coefficients ŷ = (ŷ_0, ŷ_1, ..., ŷ_{n−1})^T can be determined by (6) through a matrix multiplication ŷ = L~y:

$$\begin{pmatrix} \hat{y}_0 \\ \hat{y}_1 \\ \vdots \\ \hat{y}_{n-1} \end{pmatrix} = \begin{pmatrix} \ell_{01} & \ell_{02} & \cdots & \ell_{0n} \\ \ell_{11} & \ell_{12} & \cdots & \ell_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ \ell_{(n-1)1} & \ell_{(n-1)2} & \cdots & \ell_{(n-1)n} \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}.$$

The inverse transformation is more straightforward: given $y_i = \sum_{k=0}^{n-1} \hat{y}_k T_k(x_i)$, it is performed by the matrix operation ~y = Tŷ:

$$\begin{pmatrix} T_0(x_1) & T_1(x_1) & \cdots & T_{n-1}(x_1) \\ T_0(x_2) & T_1(x_2) & \cdots & T_{n-1}(x_2) \\ \vdots & \vdots & \ddots & \vdots \\ T_0(x_n) & T_1(x_n) & \cdots & T_{n-1}(x_n) \end{pmatrix} \begin{pmatrix} \hat{y}_0 \\ \hat{y}_1 \\ \vdots \\ \hat{y}_{n-1} \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix},$$

where the matrix T can be generated recursively by (1) from its first two columns, (1, 1, ..., 1)^T and (x_1, x_2, ..., x_n)^T, and L = (2/n)T^T except for its first row (coming from the first column of T), whose entries are all 1/n.

These two operations perform the transformation from physical (data) space to spectral (coefficient) space, and vice versa. The rationale behind spectral methods is that it is more computationally convenient to perform operations (or, more frequently, multiple operations) in spectral space: that is, apply an operation, such as differentiation, to $f(x) = \sum_{i=0}^{n-1} \hat{y}_i T_i(x)$ to get $f'(x) = \sum_{i=0}^{n-1} \hat{y}'_i T_i(x)$.
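A sketch of the round trip, building T column by column via the recurrence (1) and L as described (assumes NumPy and the sample function sin(πx), which is not from the notes):

```python
import numpy as np

n = 8
idx = np.arange(1, n + 1)
x = -np.cos((2 * idx - 1) * np.pi / (2 * n))   # Chebyshev nodes

# Build T by the recurrence (1): T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x).
T = np.zeros((n, n))
T[:, 0] = 1.0
T[:, 1] = x
for k in range(2, n):
    T[:, k] = 2 * x * T[:, k - 1] - T[:, k - 2]

L = 2.0 * T.T / n
L[0, :] = 1.0 / n              # first row of L is all 1/n

y = np.sin(np.pi * x)          # physical (data) space
yhat = L @ y                   # spectral coefficients
assert np.allclose(T @ yhat, y)   # inverse transform recovers the data
```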

¹ To be covered in week 13.



Spectral and Pseudospectral Differentiation and Integration

For example, from the derivatives of the Chebyshev polynomials: T'_0(x) ≡ 0, T'_1(x) = T_0(x) ≡ 1, and, for n > 1, (2) and the chain rule (dθ/dx = −1/sin θ) show

$$T_n'(x) = \frac{n \sin n\theta}{\sin \theta}.$$

To resolve this, the trigonometric identity needed is

$$\sin n\theta = \sin[(n-2)\theta]\cos 2\theta + \cos[(n-2)\theta]\sin 2\theta = \sin[(n-2)\theta](1 - 2\sin^2\theta) + 2\cos[(n-2)\theta]\sin\theta\cos\theta$$
$$= \sin[(n-2)\theta] + 2\sin\theta\,\bigl(\cos[(n-2)\theta]\cos\theta - \sin[(n-2)\theta]\sin\theta\bigr) = \sin[(n-2)\theta] + 2\sin\theta\cos[(n-1)\theta].$$

This gives

$$T_n'(x) = \frac{n\sin[(n-2)\theta]}{\sin\theta} + 2n\cos[(n-1)\theta] = \begin{cases} \dfrac{n}{n-2}\,T_{n-2}'(x) + 2n\,T_{n-1}(x) & n > 2, \\[1ex] 2n\,T_1(x) & n = 2. \end{cases} \tag{7}$$

Unrolling (7) gives the differentiation formula

$$T_n'(x) = 2n \sum_{\substack{k=0 \\ k+n \text{ odd}}}^{n-1} \frac{T_k(x)}{1 + \delta_{k0}},$$

which, in turn, allows a matrix computation to perform spectral differentiation, ŷ' = Dŷ, where

$$D = \begin{pmatrix} 0 & 1 & 0 & 3 & 0 & 5 & \cdots \\ 0 & 0 & 4 & 0 & 8 & 0 & \cdots \\ 0 & 0 & 0 & 6 & 0 & 10 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots & \ddots & \ddots \end{pmatrix},$$

which can be n × n, producing ŷ'_{n−1} = 0 since the last row is all zero, or (n − 1) × n, generating a shorter ŷ'; its lower triangle is entirely zero. Pseudospectral differentiation can be done on the data by ~y' = TDL~y, and higher derivatives are obtained by repeated application of a square D, e.g. ŷ'' = DDŷ = D²ŷ and ~y'' = TD²L~y.
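A sketch of the differentiation matrix D, cross-checked against NumPy's `chebder` (the entry rule D[k, m] = 2m/(1 + δ_k0) for m > k with m + k odd is read off from the series above):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 8
D = np.zeros((n, n))
for k in range(n):
    for m in range(k + 1, n):
        if (m + k) % 2 == 1:                  # only m > k with m + k odd
            D[k, m] = 2 * m / (2.0 if k == 0 else 1.0)

# Differentiate T_5: T_5'(x) = 10 (T_0/2 + T_2 + T_4) = 5 T_0 + 10 T_2 + 10 T_4.
yhat = np.zeros(n)
yhat[5] = 1.0
dy = D @ yhat
assert np.allclose(dy[: n - 1], C.chebder(yhat))  # matches numpy's derivative
assert dy[n - 1] == 0.0                           # last row of square D is zero
```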

Some notes: keeping the spectral coefficients would be more efficient for determining the interpolation's derivative at a point of the interval other than the nodes; and, by the chain rule,

$$\frac{d}{d\xi}F(\xi) = \frac{dx}{d\xi}\frac{d}{dx}f(x) = \frac{2}{b-a}\frac{d}{dx}f(x).$$

(7) also gives, for n > 1,

$$2\,T_n(x) = \frac{T_{n+1}'(x)}{n+1} - \frac{T_{n-1}'(x)}{n-1},$$

which can be used for anti-differentiation. Noting T_n(−1) = cos(n arccos(−1)) = cos(nπ) = (−1)^n and T_2(x) = 2x² − 1,

$$\int_{-1}^{x} T_0(t)\,dt = x + 1 = T_1(x) + T_0(x),$$
$$\int_{-1}^{x} T_1(t)\,dt = \frac{x^2}{2} - \frac{1}{2} = \frac{1}{4}T_2(x) - \frac{1}{4}T_0(x),$$
$$\int_{-1}^{x} T_n(t)\,dt = \frac{T_{n+1}(x)}{2(n+1)} - \frac{T_{n-1}(x)}{2(n-1)} + \frac{(-1)^{n+1}}{n^2-1}\,T_0(x), \quad n > 1.$$

This allows integration (given the value of the antiderivative at −1) by a matrix multiplication, using a sparse (n + 1) × n matrix I*, ŷ* = I*ŷ, where the value of the antiderivative at −1 is added to each term of ~y* = Tŷ* (as the integration constant). Key to these spectral methods is that D and I* are sparse (few nonzero entries) and quickly generated.
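The antiderivative formulas translate directly into the sparse (n + 1) × n matrix I*; a sketch, cross-checked against NumPy's `chebint` (which likewise fixes the antiderivative to vanish at −1 via `lbnd=-1`; the test coefficients are arbitrary):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 6
Istar = np.zeros((n + 1, n))
Istar[0, 0] = 1.0                             # int T0 = T1 + T0
Istar[1, 0] = 1.0
Istar[0, 1] = -0.25                           # int T1 = T2/4 - T0/4
Istar[2, 1] = 0.25
for m in range(2, n):                         # int Tm for m > 1
    Istar[m + 1, m] = 1.0 / (2 * (m + 1))
    Istar[m - 1, m] = -1.0 / (2 * (m - 1))
    Istar[0, m] = (-1.0) ** (m + 1) / (m * m - 1)

yhat = np.array([1.0, -2.0, 0.5, 0.0, 3.0, 1.0])  # arbitrary spectral data
ref = C.chebint(yhat, lbnd=-1)                # antiderivative vanishing at -1
assert np.allclose(Istar @ yhat, ref)
```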

