Mathematical Methods


PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information. PDF generated at: Thu, 31 Oct 2013 14:26:22 UTC

Contents

Articles
  Hypergeometric function
  Generalized hypergeometric function
  Sturm–Liouville theory
  Hermite polynomials
  Jacobi polynomials
  Legendre polynomials
  Chebyshev polynomials
  Gegenbauer polynomials
  Laguerre polynomials
  Eigenfunction

References
  Article Sources and Contributors
  Image Sources, Licenses and Contributors

Article Licenses
  License

Hypergeometric function

In mathematics, the Gaussian or ordinary hypergeometric function 2F1(a,b;c;z) is a special function represented by the hypergeometric series, which includes many other special functions as specific or limiting cases. It is a solution of a second-order linear ordinary differential equation (ODE). Every second-order linear ODE with three regular singular points can be transformed into this equation. For systematic lists of some of the many thousands of published identities involving the hypergeometric function, see the reference works by Arthur Erdélyi, Wilhelm Magnus, and Fritz Oberhettinger et al. (1953), Abramowitz & Stegun (1965), and Daalhuis (2010).

History The term "hypergeometric series" was first used by John Wallis in his 1655 book Arithmetica Infinitorum. Hypergeometric series were studied by Leonhard Euler, but the first full systematic treatment was given by Carl Friedrich Gauss (1813). Studies in the nineteenth century included those of Ernst Kummer (1836), and the fundamental characterisation by Bernhard Riemann (1857) of the hypergeometric function by means of the differential equation it satisfies. Riemann showed that the second-order differential equation for 2F1(z), examined in the complex plane, could be characterised (on the Riemann sphere) by its three regular singularities. The cases where the solutions are algebraic functions were found by Hermann Schwarz (Schwarz's list).

The hypergeometric series

The hypergeometric function is defined for |z| < 1 by the power series

  2F1(a, b; c; z) = Σ_{n=0}^∞ [(a)_n (b)_n / (c)_n] z^n / n!,

where (q)_n denotes the rising Pochhammer symbol.

Sturm–Liouville theory

In mathematics and its applications, a classical Sturm–Liouville equation, named after Jacques Charles François Sturm and Joseph Liouville, is a real second-order linear differential equation of the form

(1)  −d/dx[ p(x) dy/dx ] + q(x) y = λ w(x) y,

where y is a function of the free variable x. Here the functions p(x) > 0, q(x), and w(x) > 0 are specified at the outset. In the simplest case all coefficients are continuous on the finite closed interval [a, b], and p has a continuous derivative. In this case, the function y is called a solution if it is continuously differentiable on (a, b) and satisfies equation (1) at every point in (a, b). In addition, the unknown function y is typically required to satisfy some boundary conditions at a and b. The function w(x), which is sometimes called r(x), is called the "weight" or "density" function.

The value of λ is not specified in the equation; finding the values of λ for which there exists a non-trivial solution of (1) satisfying the boundary conditions is part of the problem called the Sturm–Liouville (S–L) problem. Such values of λ, when they exist, are called the eigenvalues of the boundary value problem defined by (1) and the prescribed set of boundary conditions. The corresponding solutions (for such a λ) are the eigenfunctions of this problem. Under normal assumptions on the coefficient functions p(x), q(x), and w(x) above, they induce a Hermitian differential operator in some function space defined by boundary conditions. The resulting theory of the existence and asymptotic behavior of the eigenvalues, the corresponding qualitative theory of the eigenfunctions, and their completeness in a suitable function space became known as Sturm–Liouville theory. This theory is important in applied mathematics, where S–L problems occur very commonly, particularly when dealing with linear partial differential equations that are separable.

A Sturm–Liouville (S–L) problem is said to be regular if p(x), w(x) > 0, and p(x), p'(x), q(x), and w(x) are continuous functions over the finite interval [a, b], and has separated boundary conditions of the form

(2)  α1 y(a) + α2 y′(a) = 0   (α1² + α2² > 0),

(3)  β1 y(b) + β2 y′(b) = 0   (β1² + β2² > 0).

Under the assumption that the S–L problem is regular, the main tenet of Sturm–Liouville theory states that:

• The eigenvalues λ1, λ2, λ3, ... of the regular Sturm–Liouville problem (1)-(2)-(3) are real and can be ordered such that

  λ1 < λ2 < λ3 < ⋯ < λn < ⋯ → ∞.

• Corresponding to each eigenvalue λn is a unique (up to a normalization constant) eigenfunction yn(x) which has exactly n − 1 zeros in (a, b). The eigenfunction yn(x) is called the n-th fundamental solution satisfying the regular Sturm–Liouville problem (1)-(2)-(3).

• The normalized eigenfunctions form an orthonormal basis,

  ∫_a^b y_n(x) y_m(x) w(x) dx = δ_mn,

in the Hilbert space L2([a, b], w(x)dx). Here δmn is the Kronecker delta.

Note that, unless p(x) is continuously differentiable and q(x), w(x) are continuous, the equation has to be understood in a weak sense.


Sturm–Liouville form

The differential equation (1) is said to be in Sturm–Liouville form or self-adjoint form. All second-order linear ordinary differential equations can be recast in the form on the left-hand side of (1) by multiplying both sides of the equation by an appropriate integrating factor (although the same is not true of second-order partial differential equations, or if y is a vector).

Examples

• The Bessel equation:

  x² y″ + x y′ + (x² − ν²) y = 0

can be written in Sturm–Liouville form as

  (x y′)′ + (x − ν²/x) y = 0.

• The Legendre equation:

  (1 − x²) y″ − 2x y′ + ν(ν + 1) y = 0

can easily be put into Sturm–Liouville form, since d/dx(1 − x²) = −2x, so the Legendre equation is equivalent to

  ((1 − x²) y′)′ + ν(ν + 1) y = 0.

• An example using an integrating factor:

  x³ y″ − x y′ + 2y = 0.

Divide throughout by x³:

  y″ − (1/x²) y′ + (2/x³) y = 0.

Multiplying throughout by an integrating factor of

  μ(x) = exp( ∫ −dx/x² ) = e^{1/x}

gives

  e^{1/x} y″ − (e^{1/x}/x²) y′ + (2 e^{1/x}/x³) y = 0,

which can easily be put into Sturm–Liouville form since

  d/dx e^{1/x} = −e^{1/x}/x²,

so the differential equation is equivalent to

  (e^{1/x} y′)′ + (2 e^{1/x}/x³) y = 0.

• The integrating factor for a general second-order differential equation:

  P(x) y″ + Q(x) y′ + R(x) y = 0.

Multiplying through by the integrating factor

  μ(x) = (1/P(x)) exp( ∫ Q(x)/P(x) dx )

and then collecting gives the Sturm–Liouville form:

  d/dx( μ(x) P(x) y′ ) + μ(x) R(x) y = 0,

or, explicitly,

  d/dx( e^{∫ Q/P dx} y′ ) + (R(x)/P(x)) e^{∫ Q/P dx} y = 0.
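This recipe can be checked numerically. The sketch below (plain Python, trapezoid quadrature) computes p(x) = exp(∫ Q/P dx) for the Bessel coefficients from the first example and confirms that it reduces to p(x) = x; the quadrature base point a = 1 and step count are arbitrary choices for the check.

```python
import math

# Integrating-factor recipe, checked on the Bessel equation
# x²y'' + x y' + (x² - ν²) y = 0: here p(x) = exp(∫ Q/P dx) should be x.
P = lambda x: x * x          # coefficient of y''
Q = lambda x: x              # coefficient of y'
nu = 2.0
R = lambda x: x * x - nu * nu

def p_of(x, a=1.0, n=20000):
    """p(x) = exp(∫_a^x Q/P dt), computed by the trapezoid rule."""
    h = (x - a) / n
    s = 0.5 * (Q(a) / P(a) + Q(x) / P(x))
    for i in range(1, n):
        t = a + i * h
        s += Q(t) / P(t)
    return math.exp(s * h)

# For Bessel, ∫ Q/P dt = ∫ dt/t = ln x, so p(x) = x and q(x) = x - ν²/x.
for x in (0.5, 2.0, 3.0):
    assert abs(p_of(x) - x) < 1e-3
print("p(x) matches x for the Bessel equation")
```

The same `p_of` works for any continuous Q/P; only the closed-form comparison is specific to the Bessel example.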


Sturm–Liouville equations as self-adjoint differential operators

The map

  L u = (1/w(x)) ( −d/dx[ p(x) du/dx ] + q(x) u )

can be viewed as a linear operator mapping a function u to another function Lu. We may study this linear operator in the context of functional analysis. In fact, equation (1) can be written as

  L u = λ u.

This is precisely the eigenvalue problem; that is, we are trying to find the eigenvalues λ1, λ2, λ3, ... and the corresponding eigenvectors u1, u2, u3, ... of the L operator. The proper setting for this problem is the Hilbert space L2([a, b], w(x) dx) with scalar product

  ⟨f, g⟩ = ∫_a^b f(x)* g(x) w(x) dx.

In this space L is defined on sufficiently smooth functions which satisfy the above boundary conditions. Moreover, L gives rise to a self-adjoint operator. This can be seen formally by using integration by parts twice, where the boundary terms vanish by virtue of the boundary conditions. It then follows that the eigenvalues of a Sturm–Liouville operator are real and that eigenfunctions of L corresponding to different eigenvalues are orthogonal. However, this operator is unbounded, and hence existence of an orthonormal basis of eigenfunctions is not evident. To overcome this problem, one looks at the resolvent

  (L − z)^{−1},

where z is chosen to be some real number which is not an eigenvalue. Then, computing the resolvent amounts to solving the inhomogeneous equation, which can be done using the variation of parameters formula. This shows that the resolvent is an integral operator with a continuous symmetric kernel (the Green's function of the problem). As a consequence of the Arzelà–Ascoli theorem, this integral operator is compact, and the existence of a sequence of eigenvalues αn which converge to 0, and of eigenfunctions which form an orthonormal basis, follows from the spectral theorem for compact operators. Finally, note that (L − z)^{−1} u = α u is equivalent to L u = (z + α^{−1}) u. If the interval is unbounded, or if the coefficients have singularities at the boundary points, one calls L singular. In this case the spectrum no longer consists of eigenvalues alone and can contain a continuous component. There is still an associated eigenfunction expansion (similar to Fourier series versus Fourier transform). This is important in quantum mechanics, since the one-dimensional time-independent Schrödinger equation is a special case of an S–L equation.

Example

We wish to find a function u(x) which solves the following Sturm–Liouville problem:

(4)  L u = −d²u/dx² = λ u,

where the unknowns are λ and u(x). As above, we must add boundary conditions; we take, for example,

  u(0) = u(π) = 0.

Observe that if k is any integer, then the function

  u(x) = sin(kx)

is a solution with eigenvalue λ = k². We know that the solutions of an S–L problem form an orthogonal basis, and we know from Fourier series that this set of sinusoidal functions is an orthogonal basis. Since orthogonal bases are always maximal (by definition), we conclude that the S–L problem in this case has no other eigenvectors.

Given the preceding, let us now solve the inhomogeneous problem

  L u = −d²u/dx² = x,   x ∈ (0, π),

with the same boundary conditions. In this case, we must expand f(x) = x as a Fourier series. The reader may check, either by integrating ∫ e^{ikx} x dx or by consulting a table of Fourier transforms, that we thus obtain

  x = 2 Σ_{k=1}^∞ (−1)^{k+1} sin(kx)/k.

This particular Fourier series is troublesome because of its poor convergence properties. It is not clear a priori whether the series converges pointwise. From Fourier analysis, since the Fourier coefficients are square-summable, the series converges in L2, which is all we need for this particular theory to function. We mention for the interested reader that in this case we may rely on a result which says that Fourier series converge at every point of differentiability, and at jump points (the function x, considered as a periodic function, has a jump at π) converge to the average of the left and right limits (see convergence of Fourier series). Therefore, by using formula (4), we obtain the solution

  u = 2 Σ_{k=1}^∞ (−1)^{k+1} sin(kx)/k³.

In this case, we could have found the answer using antidifferentiation: this technique yields u = (π²x − x³)/6, whose Fourier series agrees with the solution we found. The antidifferentiation technique is no longer useful in most cases when the differential equation is in many variables.

Application to normal modes

Certain partial differential equations can be solved with the help of S–L theory. Suppose we are interested in the modes of vibration of a thin membrane, held in a rectangular frame, 0 ≤ x ≤ L1, 0 ≤ y ≤ L2. The equation of motion for the membrane's vertical displacement, W(x, y, t), is given by the wave equation:

  ∂²W/∂x² + ∂²W/∂y² = (1/c²) ∂²W/∂t².

The method of separation of variables suggests looking first for solutions of the simple form W = X(x) × Y(y) × T(t). For such a function W the partial differential equation becomes X″/X + Y″/Y = (1/c²) T″/T. Since the three terms of this equation are functions of x, y, t separately, they must be constants. For example, the first term gives X″ = λX for a constant λ. The boundary conditions ("held in a rectangular frame") are W = 0 when x = 0, L1 or y = 0, L2, and define the simplest possible S–L eigenvalue problems as in the example, yielding the "normal mode solutions" for W with harmonic time dependence,

  W_mn(x, y, t) = A_mn sin(mπx/L1) sin(nπy/L2) cos(ω_mn t),

where m and n are non-zero integers, A_mn are arbitrary constants, and

  ω²_mn = c²π² ( m²/L1² + n²/L2² ).

The functions W_mn form a basis for the Hilbert space of (generalized) solutions of the wave equation; that is, an arbitrary solution W can be decomposed into a sum of these modes, which vibrate at their individual frequencies ω_mn. This representation may require a convergent infinite sum.
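The mode frequencies can be tabulated directly; in the sketch below the values c = 1, L1 = 1, L2 = 2 are arbitrary illustrative choices.

```python
import math

# Frequencies ω_mn = cπ √((m/L1)² + (n/L2)²) of the rectangular-membrane
# normal modes.  c, L1, L2 below are illustrative, not from the text.

def omega(m, n, c=1.0, L1=1.0, L2=2.0):
    return c * math.pi * math.hypot(m / L1, n / L2)

# The lowest few modes, sorted by frequency:
modes = sorted((omega(m, n), m, n) for m in range(1, 4) for n in range(1, 4))
for w, m, n in modes[:4]:
    print(f"mode ({m},{n}): omega = {w:.4f}")
```

For a square membrane (L1 = L2) the frequencies ω_mn and ω_nm coincide, a simple example of eigenvalue degeneracy.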


Representation of solutions and numerical calculation

The Sturm–Liouville differential equation (1) with boundary conditions may be solved in practice by a variety of numerical methods. In difficult cases, one may need to carry out the intermediate calculations to several hundred decimal places of accuracy in order to obtain the eigenvalues correctly to a few decimal places.

1. Shooting methods.[1][2] These methods proceed by guessing a value of λ, solving an initial value problem defined by the boundary conditions at one endpoint, say, a, of the interval [a, b], comparing the value this solution takes at the other endpoint b with the other desired boundary condition, and finally increasing or decreasing λ as necessary to correct the original value. This strategy is not applicable for locating complex eigenvalues.

2. Finite difference method.

3. The Spectral Parameter Power Series (SPPS) method[3] makes use of a generalization of the following fact about second order ordinary differential equations: if y is a solution which does not vanish at any point of [a, b], then the function

is a solution of the same equation and is linearly independent from y. Further, all solutions are linear combinations of these two solutions. In the SPPS algorithm, one must begin with an arbitrary value λ0* (often λ0* = 0; it does not need to be an eigenvalue) and any solution y0 of (1) with λ = λ0* which does not vanish on [a, b]. (Ways to find appropriate y0 and λ0* are discussed below.) Two sequences of functions X^(n)(t), X~^(n)(t) on [a, b], referred to as iterated integrals, are defined recursively as follows. First, when n = 0, they are taken to be identically equal to 1 on [a, b]. To obtain the next functions they are multiplied alternately by 1/(p y0²) and w y0² and integrated: for n > 0,

  X^(n)(t) = ∫_a^t X^(n−1)(s) (p(s) y0(s)²)^{−1} ds   (n odd),
  X^(n)(t) = ∫_a^t X^(n−1)(s) y0(s)² w(s) ds           (n even),

and

  X~^(n)(t) = ∫_a^t X~^(n−1)(s) y0(s)² w(s) ds          (n odd),
  X~^(n)(t) = ∫_a^t X~^(n−1)(s) (p(s) y0(s)²)^{−1} ds   (n even).

The resulting iterated integrals are now applied as coefficients in the following two power series in λ:

(5)  u0 = y0 Σ_{k=0}^∞ (λ − λ0*)^k X~^(2k)

and

(6)  u1 = y0 Σ_{k=0}^∞ (λ − λ0*)^k X^(2k+1).

Then for any λ (real or complex), u0 and u1 are linearly independent solutions of the corresponding equation (1). (The functions p(x) and q(x) take part in this construction through their influence on the choice of y0.) Next one chooses coefficients c0, c1 so that the combination y = c0u0 + c1u1 satisfies the first boundary condition (2). This is simple to do, since X^(n)(a) = 0 and X~^(n)(a) = 0 for n > 0. The values of X^(n)(b) and X~^(n)(b) provide the values of u0(b) and u1(b) and the derivatives u0′(b) and u1′(b), so the second boundary condition (3) becomes an equation in a power series in λ. For numerical work one may truncate this series to a finite number of terms, producing a calculable polynomial in λ whose roots are approximations of the sought-after eigenvalues. When λ = λ0*, this reduces to the original construction described above for a solution linearly independent to a given one. The representations (5), (6) also have theoretical applications in Sturm–Liouville theory.
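The shooting strategy (method 1 above) can be sketched in a few lines for the model problem −y″ = λy, y(0) = y(π) = 0, whose exact eigenvalues are k². This is an illustration under that specific choice of coefficients, not the production algorithms of the cited references.

```python
import math

# Shooting method for -y'' = λy on [0, π], y(0) = y(π) = 0
# (p = w = 1, q = 0): integrate the IVP with RK4, then bisect on λ
# until the boundary mismatch y(π) vanishes.

def shoot(lam, steps=2000):
    """Return y(π) for the IVP y'' = -λ y, y(0) = 0, y'(0) = 1."""
    h = math.pi / steps
    y, v = 0.0, 1.0
    f = lambda y, v: (v, -lam * y)
    for _ in range(steps):
        k1y, k1v = f(y, v)
        k2y, k2v = f(y + h / 2 * k1y, v + h / 2 * k1v)
        k3y, k3v = f(y + h / 2 * k2y, v + h / 2 * k2v)
        k4y, k4v = f(y + h * k3y, v + h * k3v)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return y

def eigenvalue(lo, hi, tol=1e-10):
    """Bisect λ in [lo, hi]; assumes y(π) changes sign across the bracket."""
    flo = shoot(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = shoot(mid)
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

print([round(eigenvalue(k * k - 0.5, k * k + 0.5), 6) for k in (1, 2, 3)])
# → [1.0, 4.0, 9.0]
```

Each bracket here was chosen around the known answer; in practice one scans λ and brackets the sign changes of y(π; λ).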

Construction of a nonvanishing solution

The SPPS method can, itself, be used to find a starting solution y0. Consider the equation (p y′)′ = μqy; i.e., q, w, and λ are replaced in (1) by 0, −q, and μ respectively. Then the constant function 1 is a nonvanishing solution corresponding to the eigenvalue μ0 = 0. While there is no guarantee that u0 or u1 will not vanish, the complex function y0 = u0 + iu1 will never vanish, because two linearly independent solutions of a regular S–L equation cannot vanish simultaneously as a consequence of the Sturm separation theorem. This trick gives a solution y0 of (1) for the value λ0 = 0. In practice, if (1) has real coefficients, the solutions based on y0 will have very small imaginary parts which must be discarded.

Application to PDEs

Consider a linear partial differential equation that is second order in one spatial dimension and first order in time, of the form

  f(x) ∂²u/∂x² + g(x) ∂u/∂x + h(x) u = ∂u/∂t.

Let us apply separation of variables, in doing which we must impose that

  u(x, t) = X(x) T(t).

Then our above PDE may be written as

  L̂X(x)/X(x) = T′(t)/T(t),

where L̂X = f X″ + g X′ + h X. Since, by definition, L̂X and X(x) are independent of time t, and T′(t) and T(t) are independent of position x, both sides of the above equation must be equal to a constant:

  L̂X = λ X,   T′ = λ T.

The first of these equations must be solved as a Sturm–Liouville problem (after multiplication by a suitable integrating factor, as above). Since there is no general analytic (exact) solution to Sturm–Liouville problems, we can assume we already have the solution to this problem; that is, we have the eigenfunctions X_n(x) and eigenvalues λ_n. The second of these equations can be analytically solved once the eigenvalues are known:

  T_n(t) = c_n e^{λ_n t},

so that

  u(x, t) = Σ_n c_n X_n(x) e^{λ_n t},

where

  c_n = ⟨X_n(x), u(x, 0)⟩ / ⟨X_n(x), X_n(x)⟩,

the inner products being taken with respect to the weight function of the associated Sturm–Liouville problem.
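As an illustration of this separation recipe, the sketch below treats the heat equation u_t = u_xx on [0, π] with u(0, t) = u(π, t) = 0 and the arbitrary (assumed) initial profile u(x, 0) = x(π − x); the eigenfunctions sin(nx) with λ_n = −n² come from the example treated earlier.

```python
import math

# u_t = u_xx, u(0,t) = u(π,t) = 0, u(x,0) = x(π - x).
# Expansion: u(x,t) = Σ c_n e^{-n² t} sin(nx), with c_n computed by the
# w ≡ 1 inner product on [0, π] (trapezoid rule).

def coeff(n, m=4000):
    # c_n = (2/π) ∫_0^π x(π - x) sin(nx) dx
    h = math.pi / m
    s = 0.0
    for i in range(1, m):
        x = i * h
        s += x * (math.pi - x) * math.sin(n * x)
    return 2.0 / math.pi * s * h

def u(x, t, terms=30):
    return sum(coeff(n) * math.exp(-n * n * t) * math.sin(n * x)
               for n in range(1, terms + 1))

# Closed form for this profile: c_n = 8/(π n³) for odd n, 0 for even n.
assert abs(coeff(1) - 8 / math.pi) < 1e-4
assert abs(coeff(2)) < 1e-4
print("u(pi/2, 0.1) ≈", round(u(math.pi / 2, 0.1), 4))
```

Only the initial profile and its coefficient formula are specific to this sketch; the structure (eigenfunctions, exponential time factors, inner-product coefficients) is exactly the recipe in the text.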


References

[1] J. D. Pryce, Numerical Solution of Sturm–Liouville Problems, Clarendon Press, Oxford, 1993.
[2] V. Ledoux, M. Van Daele, G. Vanden Berghe, "Efficient computation of high index Sturm–Liouville eigenvalues for problems in physics," Comput. Phys. Comm. 180, 2009, 532–554.
[3] V. V. Kravchenko, R. M. Porter, "Spectral parameter power series for Sturm–Liouville problems," Mathematical Methods in the Applied Sciences (MMAS) 33, 2010, 459–468.

Further reading

• Hazewinkel, Michiel, ed. (2001), "Sturm–Liouville theory" (http://www.encyclopediaofmath.org/index.php?title=p/s130620), Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.
• Hartman, Peter (2002). Ordinary Differential Equations (2nd ed.). Philadelphia: SIAM. ISBN 978-0-89871-510-1.
• Polyanin, A. D. and Zaitsev, V. F. (2003). Handbook of Exact Solutions for Ordinary Differential Equations (2nd ed.). Boca Raton: Chapman & Hall/CRC Press. ISBN 1-58488-297-2.
• Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems (http://www.mat.univie.ac.at/~gerald/ftp/book-ode/). Providence: American Mathematical Society. ISBN 978-0-8218-8328-0. (Chapter 5)
• Teschl, Gerald (2009). Mathematical Methods in Quantum Mechanics; With Applications to Schrödinger Operators (http://www.mat.univie.ac.at/~gerald/ftp/book-schroe/). Providence: American Mathematical Society. ISBN 978-0-8218-4660-5. (See Chapter 9 for singular S–L operators and connections with quantum mechanics.)
• Zettl, Anton (2005). Sturm–Liouville Theory. Providence: American Mathematical Society. ISBN 0-8218-3905-5.

Hermite polynomials

In mathematics, the Hermite polynomials are a classical orthogonal polynomial sequence that arises in probability, such as in the Edgeworth series; in combinatorics, as an example of an Appell sequence, obeying the umbral calculus; in numerical analysis, in Gaussian quadrature; in finite element methods, as shape functions for beams; and in physics, where they give rise to the eigenstates of the quantum harmonic oscillator. They are also used in systems theory in connection with nonlinear operations on Gaussian noise. They are named after Charles Hermite (1864),[1] although they were studied earlier by Laplace (1810) and Chebyshev (1859).[2]

Definition

There are two different ways of standardizing the Hermite polynomials:

  He_n(x) = (−1)^n e^{x²/2} (d^n/dx^n) e^{−x²/2}

(the "probabilists' Hermite polynomials"), and

  H_n(x) = (−1)^n e^{x²} (d^n/dx^n) e^{−x²}

(the "physicists' Hermite polynomials"). These two definitions are not exactly identical; each one is a rescaling of the other:

  H_n(x) = 2^{n/2} He_n(√2 x),   He_n(x) = 2^{−n/2} H_n(x/√2).

These are Hermite polynomial sequences of different variances; see the material on variances below. The notation He and H is that used in the standard references Tom H. Koornwinder, Roderick S. C. Wong, and Roelof Koekoek et al. (2010) and Abramowitz & Stegun. The polynomials He_n are sometimes denoted by H_n,


especially in probability theory, because

  (1/√(2π)) e^{−x²/2}

is the probability density function for the normal distribution with expected value 0 and standard deviation 1. The first eleven probabilists' Hermite polynomials are:

  He_0(x) = 1
  He_1(x) = x
  He_2(x) = x² − 1
  He_3(x) = x³ − 3x
  He_4(x) = x⁴ − 6x² + 3
  He_5(x) = x⁵ − 10x³ + 15x
  He_6(x) = x⁶ − 15x⁴ + 45x² − 15
  He_7(x) = x⁷ − 21x⁵ + 105x³ − 105x
  He_8(x) = x⁸ − 28x⁶ + 210x⁴ − 420x² + 105
  He_9(x) = x⁹ − 36x⁷ + 378x⁵ − 1260x³ + 945x
  He_10(x) = x¹⁰ − 45x⁸ + 630x⁶ − 3150x⁴ + 4725x² − 945

The first six (probabilists') Hermite polynomials He_n(x).

and the first eleven physicists' Hermite polynomials are:

  H_0(x) = 1
  H_1(x) = 2x
  H_2(x) = 4x² − 2
  H_3(x) = 8x³ − 12x
  H_4(x) = 16x⁴ − 48x² + 12
  H_5(x) = 32x⁵ − 160x³ + 120x
  H_6(x) = 64x⁶ − 480x⁴ + 720x² − 120
  H_7(x) = 128x⁷ − 1344x⁵ + 3360x³ − 1680x
  H_8(x) = 256x⁸ − 3584x⁶ + 13440x⁴ − 13440x² + 1680
  H_9(x) = 512x⁹ − 9216x⁷ + 48384x⁵ − 80640x³ + 30240x
  H_10(x) = 1024x¹⁰ − 23040x⁸ + 161280x⁶ − 403200x⁴ + 302400x² − 30240

The first six (physicists') Hermite polynomials H_n(x).


Properties

H_n is a polynomial of degree n. The probabilists' version He has leading coefficient 1, while the physicists' version H has leading coefficient 2^n.

Orthogonality

H_n(x) and He_n(x) are nth-degree polynomials for n = 0, 1, 2, 3, .... These polynomials are orthogonal with respect to the weight function (measure)

  w(x) = e^{−x²/2}   (He)   or   w(x) = e^{−x²}   (H),

i.e., we have

  ∫_{−∞}^{∞} H_m(x) H_n(x) w(x) dx = 0

when m ≠ n. Furthermore,

  ∫_{−∞}^{∞} He_m(x) He_n(x) e^{−x²/2} dx = √(2π) n! δ_mn   (probabilist)

or

  ∫_{−∞}^{∞} H_m(x) H_n(x) e^{−x²} dx = √π 2^n n! δ_mn   (physicist).

The probabilist polynomials are thus orthogonal with respect to the standard normal probability density function.

Completeness

The Hermite polynomials (probabilist or physicist) form an orthogonal basis of the Hilbert space of functions satisfying

  ∫_{−∞}^{∞} |f(x)|² w(x) dx < ∞,

in which the inner product is given by the integral including the Gaussian weight function w(x) defined in the preceding section,

  ⟨f, g⟩ = ∫_{−∞}^{∞} f(x) g(x)* w(x) dx.

An orthogonal basis for L2(R, w(x) dx) is a complete orthogonal system. For an orthogonal system, completeness is equivalent to the fact that the 0 function is the only function f ∈ L2(R, w(x) dx) orthogonal to all functions in the system. Since the linear span of Hermite polynomials is the space of all polynomials, one has to show (in the physicist case) that if f satisfies

  ∫_{−∞}^{∞} f(x) x^n e^{−x²} dx = 0

for every n ≥ 0, then f = 0. One possible way to do it is to see that the entire function

  F(z) = ∫_{−∞}^{∞} f(x) e^{zx − x²} dx = Σ_{n=0}^∞ (z^n/n!) ∫ f(x) x^n e^{−x²} dx = 0

vanishes identically. The fact that F(it) = 0 for every real t means that the Fourier transform of f(x) e^{−x²} is 0, hence f is 0 almost everywhere. Variants of the above completeness proof apply to other weights with exponential decay. In the Hermite case, it is also possible to prove an explicit identity that implies completeness (see "Completeness relation" below). An equivalent formulation of the fact that Hermite polynomials are an orthogonal basis for L2(R, w(x) dx) consists in introducing Hermite functions (see below), and in saying that the Hermite functions are an orthonormal basis for L2(R).

Hermite's differential equation

The probabilists' Hermite polynomials are solutions of the differential equation

  u″ − x u′ + λ u = 0,

where λ is a constant, with the boundary conditions that u should be polynomially bounded at infinity. With these boundary conditions, the equation has solutions only if λ is a non-negative integer, and up to an overall scaling, the solution is uniquely given by u(x) = He_λ(x). Rewriting the differential equation as an eigenvalue problem,

  L[u] = u″ − x u′ = −λ u,

solutions are the eigenfunctions of the differential operator L. This eigenvalue problem is called the Hermite equation, although the term is also used for the closely related equation

  u″ − 2x u′ + 2λ u = 0,

whose solutions are the physicists' Hermite polynomials. With more general boundary conditions, the Hermite polynomials can be generalized to obtain more general analytic functions He_λ(z) for λ a complex index. An explicit formula can be given in terms of a contour integral (Courant & Hilbert 1953).

Recursion relation

The sequence of Hermite polynomials also satisfies the recursion

  He_{n+1}(x) = x He_n(x) − He_n′(x)   (probabilist).

Individual coefficients are related by the following recursion formula:

  a_{n+1,k} = a_{n,k−1} − (k + 1) a_{n,k+1},

and a[0,0] = 1, a[1,0] = 0, a[1,1] = 1 (assuming a_{n,k} = 0 for k < 0 or k > n). For the physicists' polynomials,

  H_{n+1}(x) = 2x H_n(x) − H_n′(x)   (physicist).

Individual coefficients are related by the following recursion formula:

  a_{n+1,k} = 2 a_{n,k−1} − (k + 1) a_{n,k+1},

and a[0,0] = 1, a[1,0] = 0, a[1,1] = 2.

The Hermite polynomials constitute an Appell sequence, i.e., they are a polynomial sequence satisfying the identity

  He_n′(x) = n He_{n−1}(x)   (probabilist),
  H_n′(x) = 2n H_{n−1}(x)   (physicist),

or, equivalently, by Taylor expanding,

  He_n(x + y) = Σ_{k=0}^n C(n, k) x^{n−k} He_k(y)   (probabilist),
  H_n(x + y) = Σ_{k=0}^n C(n, k) H_k(y) (2x)^{n−k}   (physicist).

In consequence, for the m-th derivatives the following relations hold:

  He_n^{(m)}(x) = (n!/(n − m)!) He_{n−m}(x)   (probabilist),
  H_n^{(m)}(x) = 2^m (n!/(n − m)!) H_{n−m}(x)   (physicist).

It follows that the Hermite polynomials also satisfy the recurrence relation

  He_{n+1}(x) = x He_n(x) − n He_{n−1}(x)   (probabilist),
  H_{n+1}(x) = 2x H_n(x) − 2n H_{n−1}(x)   (physicist).

These last relations, together with the initial polynomials H_0(x) and H_1(x), can be used in practice to compute the polynomials quickly. Turán's inequalities are

  H_n(x)² − H_{n−1}(x) H_{n+1}(x) > 0.

Moreover, the following multiplication theorem holds:

  H_n(γx) = Σ_{i=0}^{⌊n/2⌋} γ^{n−2i} (γ² − 1)^i C(n, 2i) ((2i)!/i!) H_{n−2i}(x).
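The three-term recurrences translate directly into code; a short sketch that builds coefficient lists (lowest degree first) for both normalizations:

```python
# He_{n+1}(x) = x He_n(x) - n He_{n-1}(x)   (probabilist)
# H_{n+1}(x)  = 2x H_n(x) - 2n H_{n-1}(x)   (physicist)

def hermite_coeffs(n, physicist=False):
    """Coefficients of He_n (or H_n), lowest degree first."""
    a, b = [1], [0, 2] if physicist else [0, 1]   # degree 0 and 1 cases
    if n == 0:
        return a
    for k in range(1, n):
        # multiply b by x (physicist: 2x), then subtract k (resp. 2k) * a
        c = [0] + ([2 * x for x in b] if physicist else b[:])
        for i, ai in enumerate(a):
            c[i] -= (2 * k if physicist else k) * ai
        a, b = b, c
    return b

print(hermite_coeffs(3))                  # [0, -3, 0, 1]   i.e. x³ - 3x
print(hermite_coeffs(3, physicist=True))  # [0, -12, 0, 8]  i.e. 8x³ - 12x
```

The leading coefficients come out as 1 and 2^n respectively, matching the Properties section.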

Explicit expression

The physicists' Hermite polynomials can be written explicitly as

  H_n(x) = n! Σ_{l=0}^{n/2} [ (−1)^{n/2 − l} / ((2l)! (n/2 − l)!) ] (2x)^{2l}

for even values of n, and

  H_n(x) = n! Σ_{l=0}^{(n−1)/2} [ (−1)^{(n−1)/2 − l} / ((2l + 1)! ((n−1)/2 − l)!) ] (2x)^{2l+1}

for odd values of n. These two equations may be combined into one using the floor function:

  H_n(x) = n! Σ_{m=0}^{⌊n/2⌋} [ (−1)^m / (m! (n − 2m)!) ] (2x)^{n−2m}.

The probabilists' Hermite polynomials He have similar formulas, which may be obtained from these by replacing the power of 2x with the corresponding power of (√2)x, and multiplying the entire sum by 2^{−n/2}.


Generating function

The Hermite polynomials are given by the exponential generating function

  e^{xt − t²/2} = Σ_{n=0}^∞ He_n(x) t^n/n!   (probabilist),
  e^{2xt − t²} = Σ_{n=0}^∞ H_n(x) t^n/n!   (physicist).

This equality is valid for all x, t complex, and can be obtained by writing the Taylor expansion at x of the entire function z → e^{−z²} (in the physicist's case). One can also derive the (physicist's) generating function by using Cauchy's integral formula to write the Hermite polynomials as

  H_n(x) = (−1)^n e^{x²} (d^n/dx^n) e^{−x²} = (−1)^n e^{x²} (n!/(2πi)) ∮ e^{−z²}/(z − x)^{n+1} dz.

Using this in the sum Σ_n H_n(x) t^n/n!, one can evaluate the remaining integral using the calculus of residues and arrive at the desired generating function.

Expected values

If X is a random variable with a normal distribution with standard deviation 1 and expected value μ, then

  E[He_n(X)] = μ^n   (probabilist).

The moments of the standard normal (expected value zero) may be read off directly from the relation for even indices:

  E[X^{2n}] = (−1)^n He_{2n}(0) = (2n − 1)!!,

where (2n − 1)!! is the double factorial. Note that the above expression is a special case of the representation of the probabilists' Hermite polynomials as moments:

  He_n(x) = E[(x + iZ)^n],

where Z is a standard normal random variable.
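The even-moment relation can be confirmed by direct quadrature; a sketch using the plain trapezoid rule, truncating the Gaussian tails at ±12 (an assumption that is harmless at this accuracy):

```python
import math

# Check E[X^{2n}] = (2n - 1)!! for X standard normal, by integrating
# x^{2n} against the normal density on [-12, 12].

def moment(k, lo=-12.0, hi=12.0, steps=200000):
    h = (hi - lo) / steps
    s = 0.0
    for i in range(1, steps):
        x = lo + i * h
        s += x ** k * math.exp(-x * x / 2)
    return s * h / math.sqrt(2 * math.pi)

def double_factorial(m):
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out

for n in (1, 2, 3):
    assert abs(moment(2 * n) - double_factorial(2 * n - 1)) < 1e-5
print("even moments match (2n - 1)!!")
```

The odd moments vanish by symmetry, which the same quadrature also reproduces.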

Asymptotic expansion

Asymptotically, as n tends to infinity, the expansion

  e^{−x²/2} H_n(x) ∼ (2n/e)^{n/2} √2 cos( x √(2n) − nπ/2 )   (physicist[3])

holds true. For certain cases concerning a wider range of evaluation, it is necessary to include a factor correcting the amplitude, which, using Stirling's approximation, can be further simplified in the limit. This expansion is needed to resolve the wave function of a quantum harmonic oscillator such that it agrees with the classical approximation in the limit of the correspondence principle. A finer approximation, which takes into account the uneven spacing of the zeros near the edges, makes use of the substitution x = √(2n + 1) cos(φ), for 0 < ε ≤ φ ≤ π − ε, with which one has a uniform approximation throughout the oscillatory region.


Similar approximations hold for the monotonic region, obtained with the substitution x = √(2n + 1) cosh(φ), and for the transition regions near x = ±√(2n + 1); in the transition regions, with the appropriately rescaled variable complex and bounded, the asymptotic behaviour is expressed in terms of Ai(·), the Airy function of the first kind.

Special values

The Hermite polynomials evaluated at zero argument, H_n(0), are called Hermite numbers:

  H_n(0) = 0   for odd n,
  H_n(0) = (−2)^{n/2} (n − 1)!!   for even n.

In terms of the probabilists' polynomials this translates to

  He_n(0) = 0   for odd n,
  He_n(0) = (−1)^{n/2} (n − 1)!!   for even n.

Relations to other functions

Laguerre polynomials

The Hermite polynomials can be expressed as a special case of the Laguerre polynomials:

  H_{2n}(x) = (−4)^n n! L_n^{(−1/2)}(x²)   (physicist),
  H_{2n+1}(x) = 2 (−4)^n n! x L_n^{(1/2)}(x²)   (physicist).

Relation to confluent hypergeometric functions

The Hermite polynomials can be expressed as a special case of the parabolic cylinder functions:

  H_n(x) = 2^n U(−n/2, 1/2, x²)   (physicist),

where U(a, b, z) is the confluent hypergeometric function of the second kind. Similarly,

  H_{2n}(x) = (−1)^n ((2n)!/n!) 1F1(−n, 1/2; x²)   (physicist),
  H_{2n+1}(x) = (−1)^n ((2n + 1)!/n!) 2x 1F1(−n, 3/2; x²)   (physicist),

where 1F1(a, b; z) = M(a, b, z) is Kummer's confluent hypergeometric function.


Differential operator representation

The probabilists' Hermite polynomials satisfy the identity

  He_n(x) = e^{−D²/2} x^n,

where D represents differentiation with respect to x, and the exponential is interpreted by expanding it as a power series. There are no delicate questions of convergence of this series when it operates on polynomials, since all but finitely many terms vanish. Since the power series coefficients of the exponential are well known, and higher-order derivatives of the monomial x^n can be written down explicitly, this differential operator representation gives rise to a concrete formula for the coefficients of H_n that can be used to quickly compute these polynomials. Since the formal expression for the Weierstrass transform W is e^{D²}, we see that the Weierstrass transform of (√2)^n He_n(x/√2) is x^n. Essentially the Weierstrass transform thus turns a series of Hermite polynomials into a corresponding Maclaurin series. The existence of some formal power series g(D), with nonzero constant coefficient, such that He_n(x) = g(D) x^n, is another equivalent of the statement that these polynomials form an Appell sequence (cf. the Weierstrass transform W above). Since they are an Appell sequence, they are a fortiori a Sheffer sequence.

Contour integral representation

The Hermite polynomials have a representation in terms of a contour integral, as

  He_n(x) = (n!/(2πi)) ∮ e^{tx − t²/2} t^{−n−1} dt   (probabilist),
  H_n(x) = (n!/(2πi)) ∮ e^{2tx − t²} t^{−n−1} dt   (physicist),

with the contour encircling the origin.

Generalizations

The (probabilists') Hermite polynomials defined above are orthogonal with respect to the standard normal probability distribution, whose density function is

  (1/√(2π)) e^{−x²/2},

which has expected value 0 and variance 1. One may speak of Hermite polynomials of variance α,

  He_n^{[α]}(x),

where α is any positive number. These are orthogonal with respect to the normal probability distribution whose density function is

  (2πα)^{−1/2} e^{−x²/(2α)}.

They are given by

  He_n^{[α]}(x) = α^{n/2} He_n(x/√α).

In particular, the physicists' Hermite polynomials are

  H_n(x) = 2^n He_n^{[1/2]}(x).

If He_n^{[α]} denotes the Hermite polynomial of variance α, then the polynomial sequence whose nth term is

  (He_n^{[α]} ∘ He^{[β]})(x) = He_n^{[α]}(He^{[β]}(x))

is the umbral composition of the two polynomial sequences, and it can be shown to satisfy the identities

  He_n^{[α]}(He^{[β]}(x)) = He_n^{[α+β]}(x)

and

  He_n^{[α+β]}(x + y) = Σ_{k=0}^n C(n, k) He_k^{[α]}(x) He_{n−k}^{[β]}(y).

The last identity is expressed by saying that this parameterized family of polynomial sequences is a cross-sequence.

"Negative variance"

Since polynomial sequences form a group under the operation of umbral composition, one may denote by

  He_n^{[−α]}(x)

the sequence that is inverse to the one similarly denoted but without the minus sign, and thus speak of Hermite polynomials of negative variance. For α > 0, the coefficients of He_n^{[−α]}(x) are just the absolute values of the corresponding coefficients of He_n^{[α]}(x). These arise as moments of normal probability distributions: the nth moment of the normal distribution with expected value μ and variance σ² is

  E[X^n] = He_n^{[−σ²]}(μ),

where X is a random variable with the specified normal distribution. A special case of the cross-sequence identity then says that

  He_n^{[α]}(He^{[−α]}(x)) = x^n.

Applications

Hermite functions

One can define the Hermite functions from the physicists' polynomials:

  ψ_n(x) = (2^n n! √π)^{−1/2} e^{−x²/2} H_n(x).

Since these functions contain the square root of the weight function, and have been scaled appropriately, they are orthonormal:

  ∫_{−∞}^{∞} ψ_n(x) ψ_m(x) dx = δ_nm,

and form an orthonormal basis of L2(R). This fact is equivalent to the corresponding statement for Hermite polynomials (see above). The Hermite functions are closely related to the Whittaker functions (Whittaker and Watson, 1962), via the parabolic cylinder function D_n(z) = 2^{−n/2} e^{−z²/4} H_n(z/√2):

  ψ_n(x) = (n! √π)^{−1/2} D_n(x√2),

and thereby to other parabolic cylinder functions. The Hermite functions satisfy the differential equation:

  ψ_n″(x) + (2n + 1 − x²) ψ_n(x) = 0.


This equation is equivalent to the Schrödinger equation for a harmonic oscillator in quantum mechanics, so these functions are the eigenfunctions.
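The Hermite functions can be generated stably from the multiplication-by-x relation below, rearranged as ψ_{n+1}(x) = √(2/(n+1)) x ψ_n(x) − √(n/(n+1)) ψ_{n−1}(x); a sketch that also spot-checks orthonormality by quadrature (tails truncated at ±10, an assumption that is harmless for small n):

```python
import math

# ψ_0(x) = π^{-1/4} e^{-x²/2}, ψ_1(x) = √2 x ψ_0(x), then
# ψ_n(x) = √(2/n) x ψ_{n-1}(x) - √((n-1)/n) ψ_{n-2}(x).

def psi(n, x):
    p_prev = math.pi ** -0.25 * math.exp(-x * x / 2)
    if n == 0:
        return p_prev
    p = math.sqrt(2.0) * x * p_prev
    for k in range(2, n + 1):
        p_prev, p = p, math.sqrt(2.0 / k) * x * p - math.sqrt((k - 1) / k) * p_prev
    return p

def inner(m, n, lo=-10.0, hi=10.0, steps=40000):
    """⟨ψ_m, ψ_n⟩ by the trapezoid rule; the tails beyond ±10 are negligible."""
    h = (hi - lo) / steps
    return sum(psi(m, lo + i * h) * psi(n, lo + i * h) for i in range(1, steps)) * h

assert abs(inner(3, 3) - 1.0) < 1e-4   # normalized
assert abs(inner(2, 4)) < 1e-4         # orthogonal
print("orthonormality verified")
```

Working with ψ_n directly, rather than H_n times the Gaussian, avoids the overflow of the rapidly growing polynomial values at moderate n.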

Hermite functions 0 (black), 1 (red), 2 (blue), 3 (yellow), 4 (green), and 5 (magenta).

Hermite functions 0 (black), 2 (blue), 4 (green), and 50 (magenta).

Recursion relation

Following recursion relations of Hermite polynomials, the Hermite functions obey

  ψ_n′(x) = √(n/2) ψ_{n−1}(x) − √((n + 1)/2) ψ_{n+1}(x)

as well as

  x ψ_n(x) = √(n/2) ψ_{n−1}(x) + √((n + 1)/2) ψ_{n+1}(x).

The first relation can be extended to arbitrary m-th derivatives for any positive integer m; the resulting formula can be used in connection with the recurrence relations for He_n and ψ_n to calculate any derivative of the Hermite functions efficiently.

Cramér's inequality

The Hermite functions satisfy the following bound due to Harald Cramér:

  |ψ_n(x)| ≤ K π^{−1/4},

or, equivalently, |H_n(x)| e^{−x²/2} ≤ K 2^{n/2} (n!)^{1/2}, for x real, where the constant K is less than 1.086435.

Hermite functions as eigenfunctions of the Fourier transform

The Hermite functions ψ_n(x) are a set of eigenfunctions of the continuous Fourier transform. To see this, take the physicist's version of the generating function and multiply by e^{−x²/2}. This gives

  e^{−x²/2 + 2xt − t²} = Σ_{n=0}^∞ e^{−x²/2} H_n(x) t^n/n!.

Choosing the unitary representation of the Fourier transform,

  F[f](ξ) = (2π)^{−1/2} ∫_{−∞}^{∞} f(x) e^{−iξx} dx,

the Fourier transform of the left-hand side is given by

  F[e^{−x²/2 + 2xt − t²}](ξ) = e^{−ξ²/2 − 2iξt + t²},

and the Fourier transform of the right-hand side is given by

  Σ_{n=0}^∞ e^{−ξ²/2} H_n(ξ) (−it)^n/n!.

Equating like powers of t in the transformed versions of the left- and right-hand sides finally yields

  F[ψ_n](ξ) = (−i)^n ψ_n(ξ).

The Hermite functions ψ_n(x) are thus an orthonormal basis of L2(R) which diagonalizes the Fourier transform operator. In this case, we chose the unitary version of the Fourier transform, so the eigenvalues are (−i)^n. The ensuing resolution of the identity then serves to define powers, including fractional ones, of the Fourier transform, to wit a fractional Fourier transform generalization.

Combinatorial interpretation of coefficients

In the Hermite polynomial He_n(x) of variance 1, the absolute value of the coefficient of x^k is the number of (unordered) partitions of an n-member set into k singletons and (n − k)/2 (unordered) pairs. The sum of the absolute values of the coefficients gives the total number of partitions into singletons and pairs, the so-called telephone numbers 1, 1, 2, 4, 10, 26, 76, 232, 764, 2620, 9496, ... (sequence A000085 in OEIS). These numbers may also be expressed as a special value of the Hermite polynomials:

  T(n) = He_n(i)/i^n.
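The telephone numbers satisfy the recurrence T(n) = T(n−1) + (n−1)T(n−2): element n is either a singleton, or is paired with one of the other n − 1 elements. A short sketch reproducing the sequence above:

```python
# Telephone numbers: partitions of an n-set into singletons and pairs.
# T(n) = T(n-1) + (n-1) T(n-2), with T(0) = T(1) = 1.

def telephone(n):
    a, b = 1, 1          # T(0), T(1)
    for k in range(2, n + 1):
        a, b = b, b + (k - 1) * a
    return b if n else 1

print([telephone(n) for n in range(10)])
# → [1, 1, 2, 4, 10, 26, 76, 232, 764, 2620]
```

Summing the absolute coefficient values produced by the Hermite recurrence gives the same numbers, as the combinatorial interpretation predicts.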


Completeness relation

The Christoffel–Darboux formula for Hermite polynomials reads

  Σ_{k=0}^n H_k(x) H_k(y) / (2^k k!) = (1/(2^{n+1} n!)) · (H_n(y) H_{n+1}(x) − H_n(x) H_{n+1}(y)) / (x − y).

Moreover, the following identity holds in the sense of distributions:

  Σ_{n=0}^∞ ψ_n(x) ψ_n(y) = δ(x − y),

where δ is the Dirac delta function, (ψ_n) the Hermite functions, and δ(x − y) represents the Lebesgue measure on the line y = x in R², normalized so that its projection on the horizontal axis is the usual Lebesgue measure. This distributional identity follows by letting u → 1 in Mehler's formula, valid when −1 < u < 1.

Jacobi polynomials

The asymptotics of the Jacobi polynomials near the points ±1 is given by the Mehler–Heine formula,

where the limits are uniform for z in a bounded domain. The asymptotics outside [−1, 1] is less explicit.

Applications

Wigner d-matrix

The expression (1) allows the expression of the Wigner d-matrix d^j_{m′,m}(φ) (for 0 ≤ φ ≤ 4π) in terms of Jacobi polynomials:

Notes

[1] The definition is in IV.1; the differential equation is in IV.2; Rodrigues' formula is in IV.3; the generating function is in IV.4; the recurrence relation is in IV.5.

Further reading

• Andrews, George E.; Askey, Richard; Roy, Ranjan (1999), Special Functions, Encyclopedia of Mathematics and its Applications 71, Cambridge University Press, ISBN 978-0-521-62321-6, MR 1688958 (http://www.ams.org/mathscinet-getitem?mr=1688958).
• Koornwinder, Tom H.; Wong, Roderick S. C.; Koekoek, Roelof; Swarttouw, René F. (2010), "Orthogonal Polynomials" (http://dlmf.nist.gov/18), in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W., NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0521192255, MR 2723248 (http://www.ams.org/mathscinet-getitem?mr=2723248).


External links • Weisstein, Eric W., " Jacobi Polynomial (http://mathworld.wolfram.com/JacobiPolynomial.html)", MathWorld.

Legendre polynomials

In mathematics, Legendre functions are solutions to Legendre's differential equation:

  d/dx[ (1 − x²) (d/dx) P(x) ] + n(n + 1) P(x) = 0.

They are named after Adrien-Marie Legendre. This ordinary differential equation is frequently encountered in physics and other technical fields. In particular, it occurs when solving Laplace's equation (and related partial differential equations) in spherical coordinates. The Legendre differential equation may be solved using the standard power series method. The equation has regular singular points at x = ±1, so, in general, a series solution about the origin will only converge for |x| < 1.