QUANTIZATION AND A GREEN'S FUNCTION FOR SYSTEMS OF LINEAR ORDINARY DIFFERENTIAL EQUATIONS

Harold V. McIntosh

Escuela Superior de Fisica y Matematicas,
Instituto Politecnico Nacional,
Mexico 14, D.F. Mexico

Instituto Nacional de Energia Nuclear,
Avenida Insurgentes Sur 1079,
Mexico 18, D.F. Mexico

One of the traditional puzzles for students of quantum mechanics is the reconciliation of the quantification principle that wave functions must be square integrable with the reality that continuum wave functions do not respect this requirement. By protesting that neither are they quantized, the problem can be sidestepped, although some ingenuity may still be required to find suitable boundary conditions. Further difficulties await later on when particular systems are studied in more detail. Sometimes all the solutions, and not just some of them, are square integrable. Then it is necessary to resort to another principle, such as continuity or finiteness of the wave function, to achieve quantization. Examples where such steps have to be taken can be found both in the Schrödinger equation and the Dirac equation. The ground state of the hydrogen atom, the hydrogen atom in Minkowski space, the theta component of angular momentum, all pose problems for the Schrödinger equation. Finiteness of the wave function alone is not a reliable principle because it fails in the radial equation of the Dirac hydrogen atom. Thus the quantizing conditions which have been invoked for one potential or another seem to be quite varied.

Singular potentials, the criteria for which vary somewhat between the Schrödinger equation, the Dirac equation, and the Klein-Gordon equation, originate another variant of the quantization problem. Continuum wave functions forming the positive energy states of a potential which vanishes at large distances may not be square integrable, but they are at least irredundant in the sense of linear independence in a vector space. Singular potentials exhibit another type of continuum, wherein every energy possesses square-integrable solutions to the wave equation, but they are all linearly dependent on a discrete subset. It is often felt that such potentials are ``unphysical'' but the appellation is neither true nor particularly admissible as a pretext for not understanding the mathematical properties of such solutions. Physical occurrences of such potentials include the Dirac equation for superheavy nuclei, the ground states of magnetic monopoles, and higher multipole approximations to ordinary potentials. Even if magnetic monopoles do not exist, the same difficulty is still to be found in all the higher multipoles; for instance in electric dipoles.

One wonders why the requirement of square-integrability as the condition of quantization should be so prevalent. No doubt a substantial part of the reason is pedagogical. Square integrability is a rather dramatic characteristic of the solutions of a wave equation, fairly easy to explain in terms of probability, and quite in accordance with the historical development of the philosophy of quantum mechanics. Once students are convinced of the importance of square integrability, the foundations of the course have been laid, and the applications can begin. To dwell further on the foundations, particularly if much mathematics is required, disturbs the balance between speculation and results, and so the matter is usually not pursued further. By the time that the really important discrepancies begin to occur, the simple statement of the quantization principle has become so ingrained as an inviolable axiom, much as happens to the ``no crossing'' rule, that it is hard to return to basic principles.

If square integrability is not an adequate principle, what really is the principle? For an understanding of this point it seems desirable first to separate quantum dynamics from quantum observability.

Observations are described with the aid of wave functions, in the sense that the square of the amplitude of a wave function is taken to be the probability density for finding a particle, or perhaps an assemblage of particles, in a certain place at a certain time. Properties of the system, such as its energy, its momentum, or some other physical characteristic can be calculated from the wave function by means of a bilinear form and appropriate operators. For such calculations to be possible, the wave functions have to satisfy certain restrictions; for example square integrability is the requirement that there be a unit probability of finding the particle somewhere, anywhere. Consequently the requirement that wave functions should belong to Hilbert space has been widely taken as a basic principle of quantum mechanics. A whole theory of probability is then overlaid on this Hilbert space: the elementary probabilities are taken from bilinear operators on the Hilbert space, following which their origin is of no great concern.

Dynamics, on the other hand, is described by a wave equation, which follows out the temporal development of the wave functions. Even if the system is static, its stasis is described in a special way, by a time-independent wave equation. There are two styles which can be used for this description: operators or functional analysis. On the one hand there is matrix mechanics or the ``Heisenberg picture'', while on the other there are partial differential equations comprising wave mechanics and the ``Schrödinger picture''. The choice between these is a matter of taste, although some things are better expressed one way and others in the other. Differential equation theory tends to be more concrete and computational, giving specific examples and information, while functional analysis allows the information to be summarized from an algebraic viewpoint, dealing with dimensions of solution spaces, mappings between them, and so on.

Having established dynamics and interpretation as two distinct phases of quantum mechanics, the discrepancies between the two have to be taken into account. At the same time it is possible to locate the ``quantization principle'' and to identify the source of confusion with respect to its application. It is essential to observe that dynamics and interpretation work with two different structures having quite different characteristics. Dynamics is a local theory, dealing with differential equations and derivatives, boundary values and initial conditions, and such like. Interpretation is a global theory, dealing with integrals and integrability, functions and bases, probabilities and statistics. In turn, dynamics can be taken as an expression of the automorphisms of the function spaces of interpretation theory.

The quantization principle is that solutions of the dynamical equations must form a basis for the Hilbert space upon which the interpretation is performed.

Most of the confusion to which we have alluded arises from supposing that a basis for Hilbert space either must or ought to belong to Hilbert space.

The selection of a basis for Hilbert space is an old story. Physicists distinguish ``wave functions'' and ``wave packets'' for just this reason, although the way the motivation is usually stated is that they wish to localize their particle. A plane wave may satisfy the Schrödinger equation, but cannot represent a particle because it has a constant amplitude everywhere, even at very remote distances. A Fourier synthesis of plane waves may localize the particle, evidenced by the square integrability of the represented function.

Even though a Hilbert space basis of wave packets could be built up, it is still conceptually simpler to think in terms of plane waves.

Probably the reason that this distinction was lost pedagogically was to avoid burdening an elementary exposition with all the machinery of Fourier analysis, especially in those applications where bound states predominate, and a single solution is at once square integrable and stationary. Mathematical authors of books on Hilbert space have not particularly aided the cause, because of their proclivity for self-contained axiomatic systems. Hilbert space bases of Hilbert space are given a careful analysis, bounded operators receive a preferential treatment; but even such fundamental operators as the derivative wreak havoc with such a restricted theory. Such theories were not precisely what was needed.

Historically the different approaches of Dirac and von Neumann illustrate the contrast between a formalistic but readily usable approach and a mathematically accurate but somewhat complicated exposition. The growth of Schwartz' distribution theory has reconciled many of the technical discrepancies between those two extremes of approach, but the real difficulty has always been more philosophical or conceptual. From the beginning there has been fairly adequate mathematical machinery available once it was clear on what to use it, notably in the form of the theory of the Stieltjes integral.

At least for situations in which the dynamical equation can be written as a set of ordinary linear differential equations there is a very interesting explicit construction connecting the interpretation space of square integrable functions and the solution space for a set of differential equations. The construction originated with Weyl's dissertation [1] of 1910, played an important role in Schrödinger's formulation of his explanation of quantization, and eventually received an extended application when Titchmarsh and a series of his students began to generalize these concepts at mid-century. These trends were analyzed in Loebl's third volume [2] of Group Theory and its Applications which should be consulted as a predecessor to the present article.

In the Loebl article a single second order differential equation was chosen as an example, because it sufficed to explain how Green's formula establishes a mapping between function space and the space of boundary values for a differential equation. Such a low dimension simplifies the exposition, but at the same time there occur some obscuring simplifications. The fact that Green's function is a scalar is one of them, the fact that some aspects are elegantly formulated in terms of analytic functions of a single complex variable is another.

To present a discussion of some of the details of the higher order systems for Per-Olov Löwdin's anniversary volume somehow seems appropriate, because they involve group theory, projection operators, matrix partitioning, and long forgotten papers which somehow contain all that a contemporary young mathematician could desire.

Birkhoff and Langer [3] showed how to apply Sturm-Liouville theory to systems of first order ordinary differential equations, with the objective of expanding functions defined over an interval in terms of the eigenfunctions of the system over the same interval. For purposes of discussion, as well as for numerical integration, the system could be written in standard form

\begin{displaymath}\frac{dZ}{dx} = MZ + F
\end{displaymath} (1)

Z would be a vector if the intention were to treat a system of equations, but it is better to make Z a square matrix so that all the linearly independent solutions arising from an arbitrary initial condition can be obtained at once. Series solutions of such an equation, in terms of the matrizant, have been known since the last century.
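For the homogeneous case F = 0, for example, the matrizant is the series of iterated integrals of the coefficient matrix, so that the solution determined by the initial value $Z(x_0)$ reads

\begin{displaymath}Z(x) = \left\{ {\bf 1} + \int_{x_0}^x M(s)\,ds
+ \int_{x_0}^x M(s)\int_{x_0}^s M(t)\,dt\,ds + \cdots \right\} Z(x_0).
\end{displaymath}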

To obtain Green's formula or the Lagrange identity, an adjoint equation is needed, and will generally require a matrix coefficient for the derivative term. For that reason the canonical form of the equation can be introduced:

\begin{displaymath}\alpha\frac{dZ}{dx} + \{\beta + \frac{1}{2} \frac{d\alpha}{dx}\}Z
= \lambda \gamma Z + R
\end{displaymath} (2)

When F and R are zero, the equation is called homogeneous; otherwise inhomogeneous. If the homogeneous equation (2) is written in operator form

\begin{eqnarray*}L(Z) & = & \lambda\gamma Z
\end{eqnarray*}


there is an adjoint equation

\begin{displaymath}-\alpha^T\frac{dW}{dx} + \{\beta^T - \frac{1}{2}\frac{d\alpha^T}{dx}\} W
= \lambda \gamma^T W
\end{displaymath} (3)

whose operator form is

\begin{eqnarray*}M(W) & = & \lambda \gamma^T W.
\end{eqnarray*}


Obviously, if

\begin{eqnarray*}\alpha & = & -\alpha^T \\
\beta & = & \beta^T \\
\gamma & = & \gamma^T.
\end{eqnarray*}


the adjoint equation will be the same as the original equation and deserve to be called self-adjoint. The requirement that $\gamma$ be symmetric is not particularly important, but will usually be imposed to get a symmetric inner product in function space. Likewise it is not necessary to insist that $\alpha$ be nonsingular (and hence that the system be of even order) but the contrary assumption would effectively lower the order of the system, at least at the point of singularity, thereby complicating the analysis.

By direct substitution it can be established that

\begin{eqnarray*}Z^A & = & (\alpha Z)^{-1\ T},
\end{eqnarray*}


which we could call the adjoint of Z, satisfies the adjoint equation. Therefore, given a self-adjoint differential equation, Z and $Z^A$ can differ at most by a constant multiplicative factor which would have to be the discrepancy in their initial values. From this much alone we can see that the solution of a self-adjoint system of equations is going to have some special properties.
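The substitution is worth carrying out once. Writing $Y = \alpha Z$, so that $Z^{A\ T}Y = {\bf 1}$, differentiation of this product together with the homogeneous form of equation (2) gives

\begin{eqnarray*}\frac{dZ^A}{dx} & = & -\alpha^{-1\ T}\{\lambda\gamma^T - \beta^T
+ \frac{1}{2}\frac{d\alpha^T}{dx}\}Z^A
\end{eqnarray*}


which is just the adjoint equation (3) solved for the derivative.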

Once the adjoint of an operator has been properly defined, Green's formula follows from a straightforward calculation with two vector solutions $\phi$ and $\psi$.

\begin{displaymath}\int_a^b\{\phi^TL(\psi)-M(\phi)^T\psi\}dx
= \phi^T\alpha\psi\vert _b - \phi^T\alpha\psi\vert _a.
\end{displaymath} (4)

The formula can be given a real or complex form according to whether the symmetric or the hermitian transpose is used. Moreover, if these vectors are eigenfunctions,

\begin{eqnarray*}L(\psi) & = & \lambda \gamma \psi \\
M(\phi) & = & \mu \gamma^T \phi.
\end{eqnarray*}


Then

\begin{displaymath}(\lambda - \mu) \int_a^b\phi^T\gamma\psi dx = \phi^T\alpha\psi\vert _a^b.
\end{displaymath} (5)

This formula relates inner products in function space to inner products in boundary value space. If $\phi$ and $\psi$ are taken to be matrices rather than vectors, the result is a relation between Gram matrices.

For real solutions belonging to equal eigenvalues this formula expresses a conservation equation. For a single second order equation it expresses the constancy of the Wronskian of two solutions, whereas in general it states the constancy of a bilinear version of the Wronskian of two solutions, from which other multilinear invariants, including the Wronskian, can be deduced. It is more interesting that when $\phi$ and $\psi$ are matrices, the result states that the solution matrices conserve a certain bilinear form much as orthogonal matrices conserve distance. The important difference is that the conserved bilinear form is antisymmetric rather than positive definite, so that the solution matrix is required to be symplectic. This condition is particularly evident when the system is written in canonical form and $\alpha$ is the constant unit antisymmetric matrix.
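For instance, if the system is self-adjoint and a real matrix solution Z is inserted twice over into formula (5), the left hand side vanishes and

\begin{displaymath}Z^T(x)\alpha(x)Z(x) = Z^T(a)\alpha(a)Z(a)
\end{displaymath}

at every point of the interval; when $\alpha$ is the constant unit antisymmetric matrix and $Z(a) = {\bf 1}$, this is precisely the defining relation of a symplectic matrix.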

Symplectic matrices have characteristic properties, just as do unitary and orthogonal matrices. Their eigenvalues occur in reciprocal pairs with equal multiplicities. Their eigenvectors are orthogonal with respect to the metric matrix $\alpha$. These results are not as dramatic as for unitary or orthogonal matrices because the eigenvalues do not need to have absolute value 1, so that a set of symplectic matrices would not usually generate a compact group. Nevertheless they are necessary and sufficient conditions for a complete characterization, and they do have some further consequences. For example, one conclusion is that of 2p linearly independent solutions over a semi-infinite interval, at least p of them must be square integrable, a result which is important for developing the theory further.

Green's formula is the link through which many properties of the solution of a system of differential equations may be established by allowing passage between function space and boundary value space, quite aside from its use in establishing the symplectic nature of the solution matrix for the system. Perhaps the next interesting result after Green's formula is the derivation of Green's function, which relates the solutions of an inhomogeneous equation to the inhomogeneous term and the solutions of the corresponding homogeneous equation. Green's functions have been determined for an extremely wide variety of differential equations. Given this, it seems strange that the explicit form for a first order matrix equation is not more readily accessible; it does not seem to be found in several of the more widely used textbooks.

It is well known that if

\begin{eqnarray*}\frac{dZ}{dx}& = & M Z + F,
\end{eqnarray*}


and if

\begin{eqnarray*}\frac{\partial G(x,x_0)}{\partial x} & = & M(x)G(x,x_0) \\
G(x_0,x_0) & = & {\bf 1}
\end{eqnarray*}


is the solution of the homogeneous equation from unit initial conditions, then

\begin{displaymath}Z(x) = G(x,x_0)Z(x_0) + \int_{x_0}^x G(x,\sigma)F(\sigma)d\sigma
\end{displaymath} (6)

is the solution of the inhomogeneous equation.
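The formula can be checked by differentiating under the integral sign, using the defining property of G in its first argument and $G(x,x) = {\bf 1}$:

\begin{eqnarray*}\frac{dZ}{dx} & = & M(x)G(x,x_0)Z(x_0) + G(x,x)F(x)
+ \int_{x_0}^x M(x)G(x,\sigma)F(\sigma)d\sigma \\
& = & M(x)Z(x) + F(x).
\end{eqnarray*}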

If the canonical form were used instead, we would have to write

\begin{eqnarray*}M & = & \alpha^{-1}(-\beta+\lambda\gamma) \\
F & = & \alpha^{-1}R
\end{eqnarray*}


and then return to the standard formulation. But this would only mean that

\begin{eqnarray*}\alpha \frac{\partial G}{\partial x} + \beta G & = & \lambda \gamma G \\
G(x_0,x_0) & = & {\bf 1}
\end{eqnarray*}


so that in either event G would be a solution of the homogeneous equation from the unit matrix as initial condition. As a consequence,

 \begin{displaymath}Z(x) = G(x,x_0)Z(x_0) +
\int_{x_0}^x G(x,\sigma)\alpha^{-1}(\sigma) R(\sigma)d\sigma
\end{displaymath} (7)

The only trace of the canonical form lies in the presence of the factor $\alpha^{-1}$ under the integral. Usually it is said that G is the kernel of an integral operator, or that it is Green's function for an equation of Volterra type, which is to say an initial value problem.

To solve a Sturm-Liouville boundary value problem, we would begin by referring the value of the solution at an arbitrary point to the boundary values at the points a and b:

\begin{eqnarray*}Z(a) & = &
G(a,x)Z(x) + \int_x^a G(a,\sigma)\alpha^{-1}(\sigma)R(\sigma)d\sigma \\
Z(b) & = &
G(b,x)Z(x) + \int_x^b G(b,\sigma)\alpha^{-1}(\sigma)R(\sigma)d\sigma
\end{eqnarray*}


However, boundary conditions are probably more interesting than boundary values; they too can be accommodated, by using the metric matrix $\alpha$ and suitable vectors ${\bf a}$ and ${\bf b}$. We require

\begin{eqnarray*}{\bf a}^T\alpha Z(a) & = & 0 \\
{\bf b}^T\alpha Z(b) & = & 0
\end{eqnarray*}


which eventually results in two matrix equations

\begin{eqnarray*}0 &=& {\bf a}^T\alpha(a)G(a,x)Z(x) +
\int_x^a {\bf a}^T \alpha(a)G(a,\sigma)\alpha^{-1}(\sigma)R(\sigma)d\sigma \\
0 &=& {\bf b}^T\alpha(b)G(b,x)Z(x) +
\int_x^b {\bf b}^T \alpha(b)G(b,\sigma)\alpha^{-1}(\sigma)R(\sigma)d\sigma
\end{eqnarray*}


They can be combined into a single matrix equation by using the partitioning technique and writing explicit submatrices:

\begin{eqnarray*}0 &=& \left(
\begin{array}{c}
{\bf a}^T\alpha(a)G(a,x) \\ {\bf b}^T\alpha(b)G(b,x)
\end{array} \right) Z(x) +
\left( \begin{array}{c}
\int_x^a {\bf a}^T\alpha(a)G(a,\sigma)\alpha^{-1}(\sigma)R(\sigma)d\sigma \\
\int_x^b {\bf b}^T\alpha(b)G(b,\sigma)\alpha^{-1}(\sigma)R(\sigma)d\sigma
\end{array} \right)
\end{eqnarray*}


Fortunately, enough of an explicit form for the inverse of the coefficient of Z(x) can be written to be useful. Say that ${\bf a}^T\alpha(a)$ has r rows, and that the system contains 2p equations. Then there are 2p-r columns forming a matrix A for which

\begin{eqnarray*}{\bf a}^T\alpha(a)A & = & 0
\end{eqnarray*}


as well as r columns forming a matrix B for which

\begin{eqnarray*}{\bf b}^T\alpha(b)B & = & 0
\end{eqnarray*}


The subsequent results are not affected by the fact that these two matrices are not unique.
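As a small illustration, suppose that 2p = 2, that $\alpha$ is the constant unit antisymmetric matrix, and that the single condition u(a) = 0 is imposed on the vector $Z = (u,\ pu')^T$ which appears in the Schrödinger reduction given further on. Taking ${\bf a} = (0,\ 1)^T$ gives

\begin{displaymath}{\bf a}^T\alpha(a) = (1\ \ 0), \qquad
A = \left(\begin{array}{c} 0 \\ 1 \end{array}\right), \qquad
{\bf a}^T\alpha(a)A = 0,
\end{displaymath}

so that r = 1 and A consists of the single column permitted by the count 2p - r; the matrix B arises in the same way from the condition imposed at b.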

After a bit of study we arrive at the result

\begin{eqnarray*}\lefteqn{ \left( \begin{array}{c}
{\bf a}^T\alpha(a)G(a,x) \\ {\bf b}^T\alpha(b)G(b,x)
\end{array} \right)^{-1} = } \\
& & \left( \begin{array}{cc}
G(x,b)B & G(x,a)A
\end{array} \right)
\left( \begin{array}{cc}
({\bf a}^T\alpha(a)G(a,b)B)^{-1} & 0 \\
0 & ({\bf b}^T\alpha(b)G(b,a)A)^{-1}
\end{array} \right)
\end{eqnarray*}
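The result can be checked by block multiplication, using the composition rule $G(a,x)G(x,b) = G(a,b)$, the value $G(a,a) = {\bf 1}$, and the defining properties of A and B:

\begin{eqnarray*}{\bf a}^T\alpha(a)G(a,x)G(x,b)B & = & {\bf a}^T\alpha(a)G(a,b)B \\
{\bf a}^T\alpha(a)G(a,x)G(x,a)A & = & {\bf a}^T\alpha(a)A \; = \; 0 \\
{\bf b}^T\alpha(b)G(b,x)G(x,b)B & = & {\bf b}^T\alpha(b)B \; = \; 0 \\
{\bf b}^T\alpha(b)G(b,x)G(x,a)A & = & {\bf b}^T\alpha(b)G(b,a)A
\end{eqnarray*}


so that the product of the two factors just exhibited is block diagonal, with blocks that are cancelled by the accompanying inverses.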


In order to simplify several subsequent formulas it is convenient to introduce the definitions

\begin{eqnarray*}\Delta_{11} & = & {\bf a}^T\alpha(a)G(a,b)B \\
\Delta_{22} & = & {\bf b}^T\alpha(b)G(b,a)A
\end{eqnarray*}


With their help we can begin by writing a fairly explicit form for Z

\begin{eqnarray*}Z(x) & = & -\int_x^aG(x,b)B\Delta_{11}^{-1}{\bf a}^T\alpha(a)
G(a,\sigma)\alpha^{-1}(\sigma)R(\sigma)d\sigma \\
& & -\int_x^bG(x,a)A\Delta_{22}^{-1}{\bf b}^T\alpha(b)
G(b,\sigma)\alpha^{-1}(\sigma)R(\sigma)d\sigma
\end{eqnarray*}


which can be made into a single integral

\begin{displaymath}Z(x) = \int_a^b\Gamma(x,\sigma)R(\sigma)d\sigma
\end{displaymath} (8)

by defining

\begin{eqnarray*}\Gamma(x,\sigma) & = & \left\{ \begin{array}{ccc}
G(x,b)B\Delta_{11}^{-1}{\bf a}^T\alpha(a)G(a,\sigma)\alpha^{-1}(\sigma)
& & a \leq \sigma \leq x \\
-G(x,a)A\Delta_{22}^{-1}{\bf b}^T\alpha(b)G(b,\sigma)\alpha^{-1}(\sigma)
& & x \leq \sigma \leq b \\
\end{array} \right.
\end{eqnarray*}


$\Gamma(x,\sigma)$ is a matrix version of Green's function, which can serve as the kernel in a Fredholm type of equation. Some interesting observations follow from defining

\begin{eqnarray*}P_> & = & G(\sigma,b)B\Delta_{11}^{-1}
{\bf a}^T\alpha(a)G(a,\sigma)\alpha^{-1}(\sigma) \\
P_< & = & G(\sigma,a)A\Delta_{22}^{-1}
{\bf b}^T\alpha(b)G(b,\sigma)\alpha^{-1}(\sigma)
\end{eqnarray*}


First, we get the multiplication table

\begin{displaymath}\begin{array}{ccc}
P_>\alpha(\sigma)P_> = P_> & \ & P_<\alpha(\sigma)P_> = 0 \\
P_>\alpha(\sigma)P_< = 0 & \ & P_<\alpha(\sigma)P_< = P_<
\end{array} \end{displaymath}

taken with respect to the metric matrix $\alpha$. From this we conclude that the P's are orthogonal idempotents.
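For instance, keeping the definition $\Delta_{11} = {\bf a}^T\alpha(a)G(a,b)B$ in view,

\begin{eqnarray*}P_>\alpha(\sigma)P_> & = & G(\sigma,b)B\Delta_{11}^{-1}
\{{\bf a}^T\alpha(a)G(a,\sigma)G(\sigma,b)B\}
\Delta_{11}^{-1}{\bf a}^T\alpha(a)G(a,\sigma)\alpha^{-1}(\sigma) \\
& = & G(\sigma,b)B\Delta_{11}^{-1}{\bf a}^T\alpha(a)G(a,\sigma)\alpha^{-1}(\sigma)
\; = \; P_>,
\end{eqnarray*}


the braced factor collapsing to $\Delta_{11}$, while the vanishing of the cross products traces back to ${\bf a}^T\alpha(a)A = 0$ and ${\bf b}^T\alpha(b)B = 0$. Then again, we find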

\begin{eqnarray*}\Gamma(x,\sigma) & = & \left\{ \begin{array}{ccc}
G(x,\sigma)P_>(\sigma) & & a \leq \sigma \leq x \\
-G(x,\sigma)P_<(\sigma) & & x \leq \sigma \leq b
\end{array} \right.
\end{eqnarray*}


which makes the Fredholm Green's function a projection of the Volterra Green's function. It can even be written as a left hand projection.

\begin{eqnarray*}\Gamma(x,\sigma) & = & \left\{ \begin{array}{ccc}
P_>(x)\alpha(x)G(x,\sigma)\alpha^{-1}(\sigma) & & a \leq \sigma \leq x \\
-P_<(x)\alpha(x)G(x,\sigma)\alpha^{-1}(\sigma) & & x \leq \sigma \leq b
\end{array} \right.
\end{eqnarray*}


Having two orthogonal idempotents makes us curious about their sum. Fortunately the sum is readily obtainable by observing that

\begin{eqnarray*}\left(\begin{array}{cc} G(\sigma,b)B\Delta_{11}^{-1} &
G(\sigma,a)A\Delta_{22}^{-1} \end{array}\right)
\left(\begin{array}{c} {\bf a}^T\alpha(a)G(a,\sigma) \\
{\bf b}^T\alpha(b)G(b,\sigma) \end{array} \right)
& = & {\bf 1}
\end{eqnarray*}


so that

\begin{eqnarray*}G(\sigma,b)B\Delta_{11}^{-1} {\bf a}^T\alpha(a)G(a,\sigma)\alpha^{-1}(\sigma)
+ G(\sigma,a)A\Delta_{22}^{-1}{\bf b}^T\alpha(b)G(b,\sigma)\alpha^{-1}(\sigma)
& = & \alpha^{-1}(\sigma)
\end{eqnarray*}


and finally

\begin{eqnarray*}\alpha^{-1}(\sigma) & = & P_>(\sigma) + P_<(\sigma)
\end{eqnarray*}


or better

\begin{eqnarray*}{\bf 1}& = & P_>(\sigma)\alpha(\sigma) + P_<(\sigma)\alpha(\sigma)
\end{eqnarray*}


This result is necessary to verify explicitly that equation 8 gives a solution to the inhomogeneous equation, and it is also worthwhile to note that it is just the discontinuity of Green's matrix for coincident arguments. Such an irregularity is well known for scalar Green's functions; they are continuous up to a certain order, but then have a delta function discontinuity in their last derivative.
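With the projected form of $\Gamma$ in hand the jump is easy to exhibit: letting $\sigma$ approach $x$ from either side and using $G(x,x) = {\bf 1}$,

\begin{displaymath}\Gamma(x,x^-) - \Gamma(x,x^+) = P_>(x) + P_<(x) = \alpha^{-1}(x),
\end{displaymath}

so that the irregularity is governed by the same completeness relation.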

The derivation just given is valid for any system of equations, self-adjoint or not, and for any assignment of boundary conditions to one or the other of the two endpoints. Even so, the formulas have a fairly plausible interpretation. Denominators such as $\Delta_{11}^{-1}$ refer to the projection of a solution starting from one of the boundary points on the boundary condition, calculated from the adjoint equation, taken from the other endpoint. According to Green's formula this inner product will be the same no matter the interior point at which it is calculated, and it will vanish only when some initial value at one end gives a solution which meets the boundary condition at the other end. This is in accord with the dichotomy, that whenever the Sturm-Liouville problem has a solution, the inhomogeneous equation does not, and conversely.

When the system of equations is self-adjoint, the boundary values and the boundary conditions satisfy the same differential equation, even though they evolve from possibly different initial conditions and hence would not coincide. Anyway, if more conditions were specified at one end than at the other, matrices of two different dimensionalities would be involved. Thus there is a further configuration of high symmetry, wherein half of the boundary conditions are specified at one end and half at the other. At each endpoint the antisymmetric matrix $\alpha$ is a metric matrix for a symplectic geometry, and it may happen that the boundary conditions lie in a maximal isotropic subspace for such a metric. In that case, boundary values would simultaneously serve as boundary conditions. The very highest symmetry would then arise when the two kinds of solutions were considered to be identical, with no distinction between initial value and initial condition.

Indeed, this is the situation most familiar to persons experienced with Green's functions, which are often visualized as products of functions meeting the respective boundary conditions and normalized to have a unit irregularity at their point of crossing. To further compare the derivation just given with a familiar situation, it might be noticed that the self-adjoint form of a second order differential operator

\begin{eqnarray*}-(pu')' + qu = \lambda u
\end{eqnarray*}


is

\begin{eqnarray*}\left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right)
\frac{d}{dx}\left( \begin{array}{c} u \\ pu' \end{array} \right) +
\left( \begin{array}{cc} q & 0 \\ 0 & -\frac{1}{p} \end{array}\right)
\left( \begin{array}{c} u \\ pu' \end{array} \right)
& = & \lambda
\left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right)
\left( \begin{array}{c} u \\ pu' \end{array} \right)
\end{eqnarray*}


while for the one-dimensional Dirac equation,

\begin{eqnarray*}\frac{d}{dx} \left( \begin{array}{c} \phi_1 \\ \phi_2 \end{array} \right)
& = & \left( \begin{array}{cc} 0 & \lambda - V + m \\
-(\lambda - V - m) & 0 \end{array} \right)
\left( \begin{array}{c} \phi_1 \\ \phi_2 \end{array} \right)
\end{eqnarray*}


the self-adjoint form would be

\begin{eqnarray*}\left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right)
\frac{d}{dx}\left( \begin{array}{c} \phi_1 \\ \phi_2 \end{array} \right) +
\left( \begin{array}{cc} V + m & 0 \\ 0 & V - m \end{array} \right)
\left( \begin{array}{c} \phi_1 \\ \phi_2 \end{array} \right)
& = & \lambda
\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right)
\left( \begin{array}{c} \phi_1 \\ \phi_2 \end{array} \right)
\end{eqnarray*}


For the Schrödinger equation the coefficient matrix $\gamma$ is degenerate, although still symmetric. The bilinear form in function space therefore depends only on the wave function and not on its derivative. For the Dirac equation, we have to use the sum of the squares of the two components, likewise a familiar result.
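In terms of the two $\gamma$ matrices just displayed, the function space inner product appearing in formula (5), written for a pair of solutions, reduces to

\begin{displaymath}\int \left(\begin{array}{cc} u_1 & pu_1' \end{array}\right)\gamma
\left(\begin{array}{c} u_2 \\ pu_2' \end{array}\right)dx = \int u_1u_2\,dx,
\qquad
\int \left(\begin{array}{cc} \phi_1 & \phi_2 \end{array}\right)\gamma
\left(\begin{array}{c} \psi_1 \\ \psi_2 \end{array}\right)dx
= \int (\phi_1\psi_1+\phi_2\psi_2)\,dx,
\end{displaymath}

in the Schrödinger and Dirac cases respectively.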

A self-adjoint system of differential equations with canonical boundary conditions is particularly well suited to a discussion of the theory of singular differential equations, because the canonical boundary conditions are already compatible with the definition of the Weyl circle, or its generalization to a higher order system as a maximal isotropic subspace. The details of this generalization may be found in any of the standard references, or in Loebl's third volume [2]. Nevertheless, there is a further detail, the application of this theory to a doubly infinite interval, which has an interesting connection with Green's function, and which it is well to bring out this time.

To overcome difficulties of normalizing functions which have a nonzero amplitude over all of an interval which tends to infinity, it is helpful to describe eigenfunction expansions in terms of a Stieltjes integral. For all finite intervals the distribution function of this integral is a step function, whose limiting behaviour is pertinent to the spectral classification of the differential system over an infinite interval. A system of 2p differential equations will allow an eigenfunction expansion of not only scalar functions defined over the solution interval, but even vector valued functions up to dimension 2p. An expansion formula would be expected to have the form

\begin{eqnarray*}f(x) & = & \sum_{k=1}^\infty c_k u_k(x)
\end{eqnarray*}


with

\begin{eqnarray*}c_k & = & (f,u_k) = \int_a^bf^T(x)\gamma(x)u_k(x)dx
\end{eqnarray*}


In such a formula, the functions $u_k(x)$ would be the Sturm-Liouville eigenfunctions, $c_k$ their expansion coefficients, and f the arbitrary function to be expanded. However, the Sturm-Liouville functions are supposed to be orthonormal over the interval of representation, and thus of unit square integral. But for purposes of solving an initial value problem, we are more interested in using solutions normalized to a unit initial value. Probably we would use the unit antisymmetric matrix as an initial value if the system were self-adjoint and we wanted a canonical basis. Suppose that the vectors $\{\xi_{i1}\}$ form a maximal isotropic subspace at an initial point and that $\{\xi_{i2}\}$ are their canonical conjugates. Therefore we must have

\begin{eqnarray*}u_k & = & \sum \{r_{ki1}\xi_{i1} + r_{ki2}\xi_{i2} \}
\end{eqnarray*}


and by substitution

\begin{eqnarray*}f & = & \sum_{k=1}^\infty\sum_{r=1}^2\sum_{s=1}^2 r_{kr}r_{ks}(f,\xi_r)\xi_s.
\end{eqnarray*}


This formula is to be written as a Stieltjes integral

\begin{eqnarray*}f & = &
\sum_{r=1}^2\sum_{s=1}^2\int_{-\infty}^\infty (f,\xi_r)\xi_s d\rho_{rs}(\mu) \end{eqnarray*}


whose distribution function is a matrix. Its elements have discontinuities at the eigenvalues, of magnitude $r_{kr}r_{ks}$, namely products of pairs of expansion coefficients.
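Stated in this notation, the distribution function over a finite interval would be the step function

\begin{displaymath}\rho_{rs}(\mu) = \sum_{\lambda_k \leq \mu} r_{kr}r_{ks},
\end{displaymath}

whose jumps are exactly those products.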

One way of determining the spectral matrix is to expand some known function with the hope of isolating its coefficients. In the process, Green's formula may be used to reduce the integral over function space to a sum over the boundary values. Moreover, if the function chosen for expansion belongs to the Weyl surface, the terms which belong to the endpoints will drop out, freeing the formula from an explicit dependence on the endpoints. An implicit dependence remains, because the Weyl surface determines the Sturm-Liouville boundary conditions to be used, but even these vestiges disappear when real boundary conditions are invoked.

It is rare that the same function will belong to the Weyl surfaces at both ends of the two-sided interval, so that the expansion formula would best be applied to a composite function which solves the system of equations in each of two subintervals and has a joining discontinuity at an internal initial point. All integrals would be written as a sum of two parts, for each of which Green's formula would be individually valid.

Finally, it is slightly simpler to apply Green's formula to the Parseval equality rather than to the expansion formula. Taking all this into account, we begin by writing

\begin{eqnarray*}\int_a^bf^H\gamma f dx & = & \sum_{k=1}^\infty \vert(f^H\gamma u_k)\vert^2.
\end{eqnarray*}


From the left side:

\begin{eqnarray*}\int_a^bf^H\gamma f\ dx & = &
\int_a^0f^H\gamma f\ dx + \int_0^bf^H\gamma f\ dx \\
& = & \frac{f^H\alpha f(0^-) - f^H\alpha f(a) + f^H\alpha f(b) - f^H\alpha f(0^+)}
{\lambda - \lambda^*}
\end{eqnarray*}


Similarly,

\begin{eqnarray*}\sum_{k=1}^\infty\vert(f^H\gamma u_k)\vert^2 & = & \sum_{k=1}^\infty
\frac{\vert f^H\alpha u_k(0^-)-f^H\alpha u_k(a)+f^H\alpha u_k(b)-f^H\alpha u_k(0^+)\vert^2}
{\vert\lambda-\lambda_k\vert^2}
\end{eqnarray*}


As noted, a sequence of choices will simplify this equation. By supposing that f belongs to the Weyl surface at each endpoint, the terms $f^H\alpha f(b)$ and $f^H\alpha f(a)$ are eliminated. By using the f's to determine Sturm-Liouville boundary conditions at the endpoints, terms of the type $f^H\alpha u_k(b)$ or $f^H\alpha u_k(a)$ are eliminated. The resulting formula

\begin{eqnarray*}\frac{f^H\alpha f(0^-)-f^H\alpha f(0^+)}{\lambda - \lambda^*} & = &
\sum_{k=1}^\infty\frac{\vert f^H\alpha u_k(0^-)-f^H\alpha u_k(0^+)\vert^2}
{\vert\lambda-\lambda_k\vert^2}
\end{eqnarray*}


thus depends on the discontinuity in f at the origin, which is the point at which it does not satisfy the differential equation.

Since the spectral density matrix is divided naturally into quadrants, some algebraic maneuvering and a careful choice of f's is required to obtain the separate quadrants. If we take

\begin{eqnarray*}f_1 & = & \left\{ \begin{array}{ccc}
\phi + \psi M_a & & x \leq 0 \\
\phi + \chi M_b & & x \geq 0
\end{array} \right.
\end{eqnarray*}


we obtain the result

\begin{displaymath}\frac{{\rm Im}\ (M_a-M_b)^{-1}}{{\rm Im}\ (\lambda)}
= \int_{-\infty}^\infty \frac{d\rho_{11}(\mu)}{\vert\lambda-\mu\vert^2}
\end{displaymath} (9)

The choice of

\begin{eqnarray*}f_2 & = & \left\{ \begin{array}{ccc}
\phi M_a^{-1} + \psi & & x \leq 0 \\
\phi M_b^{-1} + \psi & & x \geq 0
\end{array} \right.
\end{eqnarray*}


leads to

\begin{displaymath}\frac{{\rm Im}\ (M_a(M_b-M_a)^{-1}M_b)}{{\rm Im}\ (\lambda)} =
\int_{-\infty}^\infty \frac{d\rho_{22}(\mu)}{\vert\lambda-\mu\vert^2}
\end{displaymath} (10)

The off-diagonal block can be gotten from a consideration of $(f_1,\gamma f_2)$, which follows a similar expansion and leads to

\begin{displaymath}\frac{{\rm Im}\ ((M_b-M_a)^{-1}M_b)}{{\rm Im}\ (\lambda)} =
\int_{-\infty}^\infty\frac{d\rho_{12}(\mu)}{\vert\lambda - \mu\vert^2}
\end{displaymath} (11)

Finally, $\rho_{21} = \rho_{12}^H$.

It is an interesting result that these expressions are just the discontinuities in Green's matrix along its diagonal. Therefore we can say that the imaginary part of the discontinuity in the complex Green's matrix is the spectral density function, while the discontinuity in the real Green's matrix is merely ${\bf 1}$. This explains why the complex poles of Green's function and the poles of the spectral density are the same; both depend on the same denominator $(M_a-M_b)^{-1}$.

By and large the generalization of Weyl's spectral theory to a system of equations provides the link which is needed between Hilbert space theory and differential equation theory. By the use of such devices as hyperspherical harmonics or separation of variables many quantum mechanical systems can be reduced directly to this form. At the same time, it might be expected that a fairly explicit theory could be developed directly for partial differential equations. In this respect, the use of functional analysis offers a good idea of what to expect. Nevertheless, the principal advantage of a concrete theory such as Weyl's would seem to be the lessons which it teaches us about the diversity of bases for function spaces which can occur in practice, and the danger of supposing that just one type of basis function - the bound state wave function - is typical of them all.

If Weyl's theory is capable of clarifying our philosophical understanding of quantum mechanics, we might go on to ask whether it has any merit as a numerical procedure. By studying the behavior of the determinant $\vert M_a - M_b\vert$ we have a single scalar function whose zeroes locate the eigenvalues over a finite interval, and whose limiting behaviour will give us some idea of the nature of the spectrum. It is of some advantage that $M_a$ and $M_b$ can be obtained as solutions of a Riccati equation, but equally disadvantageous if they have to be obtained from complex eigenvalues, because of the fourfold increase in real multiplications involved.
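The connection can be sketched under the assumption, as in the scalar theory, that $M_a$ is the ratio $Z_2Z_1^{-1}$ of the two p-dimensional blocks of a suitably normalized solution matrix. Partitioning the coefficient matrix of the standard form (1) conformally into blocks $m_{ij}$, differentiation of the ratio gives the matrix Riccati equation

\begin{displaymath}\frac{dM_a}{dx} = m_{21} + m_{22}M_a - M_am_{11} - M_am_{12}M_a,
\end{displaymath}

one copy of which would be integrated from each endpoint for every trial value of $\lambda$.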

In summary, we have called attention to the vexing problem of explaining just what is the quantization condition for quantum mechanics, and indicated that the extension of Weyl's second order theory of differential equations to systems of equations can be given a particularly elegant formulation which does not seem to be mentioned in any of the common differential equations textbooks. There still remains the explicit demonstration of the correctness of this interpretation through the exhibition of the resolution of a variety of typical examples, which may have to be put off until a subsequent birthday.



 