Next: Spectral Density Up: Quantization as an Eigenvalue Problem Previous: Differential Equation Theory

Symplectic Boundary Form

The real version of the bracket belonging to a second-order differential operator is the Wronskian of two solutions. For higher-order operators it is a bilinear version of the corresponding Wronskian, which itself is multilinear. The most interesting property of the Wronskian of a self-adjoint operator is its constancy, a fundamental property shared by the real bracket whatever the order of the differential operator. This result is so familiar that we might pass right over it without noticing that it is really an invariance principle: the real bracket is a bilinear form invariant under the choice of the boundary point at which it is evaluated, so long as its arguments are two solutions belonging to the same value of the eigenvalue parameter.
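The constancy of the Wronskian is easy to observe numerically. The following sketch is an illustration only: the potential $q(x)=x$, the parameter value, and the interval are arbitrary assumptions, not taken from the text. It integrates two solutions of $-y'' + q\,y = \lambda y$ from canonical initial values with a classical Runge-Kutta step and evaluates the Wronskian at both ends:

```python
# Check that W[phi,psi] = phi*psi' - phi'*psi is constant along the
# interval when phi and psi solve -y'' + q(x) y = lam y for the SAME lam.
# q(x) = x and lam = 2.0 are purely illustrative choices.

def wronskian_ends(q, lam, x0, x1, n=2000):
    """RK4-integrate phi, psi (canonical initial values at x0) and
    return the Wronskian at x0 and at x1."""
    h = (x1 - x0) / n
    y = [1.0, 0.0, 0.0, 1.0]          # [phi, phi', psi, psi']

    def f(x, y):
        return [y[1], (q(x) - lam) * y[0], y[3], (q(x) - lam) * y[2]]

    def wr(y):
        return y[0] * y[3] - y[1] * y[2]

    w_start, x = wr(y), x0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(x + h,   [yi + h*ki  for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return w_start, wr(y)

w0, w1 = wronskian_ends(lambda x: x, 2.0, 0.0, 3.0)
print(w0, w1)   # both remain at 1 to within integration error
```

The canonical initial values make the Wronskian equal to one at the left endpoint; constancy then fixes it everywhere.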

Here, as elsewhere, we find that we must on occasion use both the complex and the real variant of the bracket, so that it is not possible merely to define one and ignore the other. Indeed, considerable confusion may be avoided by expending the effort required to set down the properties of each of them separately, to the extent that it becomes possible to think of them readily and independently, each as having its own individual characteristics.

Calculation of the complex bracket

\begin{eqnarray*}
(\varphi,\psi) & = &
\frac{1}{\lambda-\lambda^*}\{[\varphi,\psi](b) - [\varphi,\psi](a)\}
\end{eqnarray*}

furnishes the desired mapping between the Hermitian inner product of the solution space and the anti-Hermitian bracket in the boundary value space, once again presuming that both of its arguments belong to the same eigenvalue. The practical consequence of this relationship is that the natural adjunct of a Hermitian geometry in the solution space is a symplectic geometry in the boundary value space.
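In the second-order case this mapping is nothing but Green's formula. With ${\cal L}[u] = -u'' + qu$, solutions $\varphi$, $\psi$ belonging to the eigenvalue $\lambda$, and the convention (an assumption here; conventions vary) that the complex bracket is $[\varphi,\psi] = \varphi\,\psi^{*\prime} - \varphi'\,\psi^*$, two integrations by parts give

\begin{eqnarray*}
(\lambda-\lambda^*)\,(\varphi,\psi)
 & = & \int_a^b \left\{ ({\cal L}\varphi)\,\psi^* - \varphi\,({\cal L}\psi)^* \right\}\,dx \\
 & = & \int_a^b \left\{ -\varphi''\,\psi^* + \varphi\,\psi^{*\prime\prime} \right\}\,dx
 \;=\; \Bigl[\, \varphi\,\psi^{*\prime} - \varphi'\,\psi^* \,\Bigr]_a^b .
\end{eqnarray*}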

Symplectic geometry is rather similar to the orthogonal geometry of Euclidean space or the Hermitian geometry of complex space, with the exception of the antisymmetry of its metric matrix and the consequent self-orthogonality of all vectors. In place of an orthonormal basis we must construct a canonical basis, whose idiosyncrasies are familiar to all who have worked with Poisson brackets and canonical coordinates in classical mechanics. When this contrast is borne in mind, most activities can be carried out in their accustomed manner. These include use of the Gram-Schmidt process to construct a canonical basis, use of the Gram matrix or the Gram determinant to ascertain linear dependence, and construction of conjugates to aid in the isolation of components of a vector.
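The Gram-Schmidt remark can be made concrete in a finite-dimensional sketch. Everything below is an illustrative assumption (a four-dimensional space whose bracket is given by an antisymmetric matrix $J$); the point is only the mechanics of pairing vectors into a canonical set $\{\alpha_i,\beta_i\}$ with $[\alpha_i,\alpha_j]=[\beta_i,\beta_j]=0$ and $[\alpha_i,\beta_j]=\delta_{ij}$:

```python
# Symplectic analog of Gram-Schmidt: pair off vectors into a canonical
# basis with respect to an antisymmetric bracket.  A sketch, not a
# library routine; the bracket and test vectors are illustrative.

def symplectic_gram_schmidt(vectors, bracket):
    alphas, betas = [], []
    pool = [list(v) for v in vectors]
    while pool:
        a = pool.pop(0)
        # find a partner whose bracket with a does not vanish
        partner = next((v for v in pool if abs(bracket(a, v)) > 1e-12), None)
        if partner is None:
            continue            # a brackets to zero with everything left
        pool.remove(partner)
        s = bracket(a, partner)
        b = [x / s for x in partner]          # normalize so [a, b] = 1
        # make every remaining vector conjugate to the pair (a, b):
        # v -> v - [v,b] a - [a,v] b  kills both brackets
        for i, v in enumerate(pool):
            ca, cb = bracket(v, b), bracket(a, v)
            pool[i] = [vi - ca*ai - cb*bi for vi, ai, bi in zip(v, a, b)]
        alphas.append(a)
        betas.append(b)
    return alphas, betas

# standard antisymmetric form on R^4
J = [[0, 0, 1, 0],
     [0, 0, 0, 1],
     [-1, 0, 0, 0],
     [0, -1, 0, 0]]

def bracket(u, v):
    return sum(u[i] * J[i][j] * v[j] for i in range(4) for j in range(4))

e = [[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]
alphas, betas = symplectic_gram_schmidt(e, bracket)
print(bracket(alphas[0], betas[0]), bracket(alphas[0], alphas[1]))
```

The conjugation step plays the role that subtracting projections plays in the orthogonal case; self-orthogonality is why each basis vector must be processed together with a partner.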

Characteristically, canonical coordinates partition space into two parts, in a fashion very reminiscent of the coordinates and momenta into which phase space is divided. For boundary value problems this splitting is particularly compatible with the most common specifications of separated Sturm-Liouville boundary conditions, wherein half of them are imposed at the left-hand boundary as initial conditions and the remaining half at the rightmost boundary point as terminal conditions.
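In the second-order case the separated conditions referred to take the familiar one-parameter form (with $\theta_a$, $\theta_b$ as illustrative parameter names)

\begin{eqnarray*}
y(a)\cos\theta_a + y'(a)\sin\theta_a & = & 0 \\
y(b)\cos\theta_b + y'(b)\sin\theta_b & = & 0 ,
\end{eqnarray*}

one condition imposed at each endpoint.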

Many occasions arise for comparing constraints imposed on a function at one point with those imposed at some other point. The strongest motivation for these mutual comparisons is the circumstance that the initial-value problem is always uniquely solvable, which makes initial data the most reliable frame of reference for solutions upon which other kinds of restrictions have been imposed. Two rather complementary procedures can be implemented to achieve such a comparison. One is to reduce a restriction at a given point to an equivalent restriction at the initial point, by solving the differential equation backwards and comparing constraints. The other is to extend a solution of the initial-value problem to the point of comparison. Comparisons are most satisfactorily made by selecting a set of standard initial conditions -- preferably a canonical set -- whose solutions can then be compared with the solution or with the intended constraints at any other point.

Invariance of the bracket means that the comparison can be obtained by doing nothing more than calculating the bracket between constraints at any given point and the values of the standard solutions at the same point. This technique was used quite effectively by Kodaira [15] in forming a general theory of even-order differential operators.

The complex bracket is not positive definite, so that it has a nontrivial null space consisting of those boundary values f which satisfy the requirement

\begin{displaymath}
[f, f] \; = \; 0
\end{displaymath}

Recalling that a Minkowski space is one in which there is a symmetric but not definite metric, we see the null space of an antisymmetric metric as the analog of the light cone of a Minkowski space.

A null space is a homogeneous space, meaning that either all the nonzero multiples of an element belong to the space or none of them do. The sum of two null vectors is not necessarily null, so the null space is not necessarily a linear space. For the characterization of the null space it is convenient to introduce a basis in the boundary value space; preferably this basis should consist of the local values of boundary functions. These in turn are best defined by a canonical set of initial conditions.

With respect to a basis an algebraic expression of the second degree is obtained for the coefficients of the null vectors; for second-order differential operators there results a circle in the complex plane. Higher order operators lead to higher dimensional ellipsoids which of course are more difficult to represent graphically.

To obtain this circle we introduce the definition

\begin{eqnarray*}
f & = & \varphi + m \psi
\end{eqnarray*}

which exploits the homogeneity of the null space to employ the single coefficient m, which will depend analytically on the eigenvalue $\lambda$ . The basis functions $\varphi$ and $\psi$ are required to meet some suitable initial conditions. As already noted, initial conditions are most conveniently expressed in terms of the real bracket and a canonical set of initial values. In the general case there would be a set of vectors $\{\alpha_i,\beta_i\}$ for which we would require

\begin{displaymath}
\begin{array}{ll}
W[\varphi,\alpha_i] = 0, \;\; & W[\varphi,\beta_i] = 0
\end{array}
\end{displaymath}

In turn, the assertion that they form a canonical basis consists in requiring that they fulfill the conditions

\begin{displaymath}
\begin{array}{lll}
[\alpha_i,\alpha_j] = 0 ,\;\; &
[\beta_i,\beta_j] = 0 ,\;\; &
[\alpha_i,\beta_j] = \delta_{ij}
\end{array}
\end{displaymath}

In the special second-order case, the requirement that

\begin{eqnarray*}
W[\psi,g] & = & 0
\end{eqnarray*}

is equivalent to stipulating that g be a multiple of $\psi$.
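The equivalence is immediate, since wherever $\psi$ does not vanish

\begin{eqnarray*}
W[\psi,g] \;=\; \psi\,g' - \psi'\,g \;=\; \psi^2 \left( \frac{g}{\psi} \right)' & = & 0
\end{eqnarray*}

forces $g/\psi$ to be constant.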

Written out in terms of the basis, the null space acquires the form

\begin{eqnarray*}
[\varphi + m \psi,\varphi + m \psi] & = & 0
\end{eqnarray*}

Expansion and some algebraic rearrangement yields

\begin{eqnarray*}
\left\vert\, m + \frac{[\varphi,\psi]}{[\psi,\psi]} \,\right\vert^2
& = & \frac{W[\varphi,\psi]\,W[\varphi,\psi]^*}{[\psi,\psi][\psi,\psi]^*}.
\end{eqnarray*}
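The omitted algebra runs as follows, under the convention (assumed here) that the complex bracket is conjugate-linear in its second argument and anti-Hermitian, $[g,f] = -[f,g]^*$:

\begin{eqnarray*}
[\varphi + m \psi,\varphi + m \psi]
 & = & [\varphi,\varphi] + m^*[\varphi,\psi] + m[\psi,\varphi] + m\,m^*[\psi,\psi] \\
 & = & [\varphi,\varphi] + m^*[\varphi,\psi] - m[\varphi,\psi]^* + m\,m^*[\psi,\psi] \;=\; 0 .
\end{eqnarray*}

Dividing by $[\psi,\psi]$ and completing the square in $m$ then produces the circle, the Wronskian entering through the Plücker-type identity $[\varphi,\varphi][\psi,\psi] + [\varphi,\psi][\varphi,\psi]^* = W[\varphi,\psi]\,W[\varphi,\psi]^*$.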

We readily enough recognize the equation of a circle, $C_b$, whose center is

\begin{eqnarray*}
z_b & = & -\frac{[\varphi,\psi]}{[\psi,\psi]}
\end{eqnarray*}

and whose radius is

\begin{eqnarray*}
r_b & = & \frac{1}{\vert\,[\psi,\psi]\,\vert},
\end{eqnarray*}

supposing that initial conditions have been chosen which will make the constant Wronskian in the numerator equal to one.

The somewhat curious result of all this is that the vital statistics of the null surface -- for second-order operators, the circle $C_b$ in the complex plane -- depend on the integrability properties of the basic solutions of the differential equation ${\cal L}[\psi] = \lambda\psi$. Integrability enters through the intermediary of Green's formula, this time applied in the reverse direction to convert the brackets into parentheses.

There are three noteworthy points:

(1) Any point on the circle $C_b$ produces a function f whose norm depends only on the initial conditions at a. Therefore, $\parallel f \parallel$ must remain bounded as $b \rightarrow \infty$, so long as $m \in C_b$.

(2) For $b_2 > b_1$, $r_{b_2}\leq r_{b_1}$; thus as $b \rightarrow \infty$ the radius of $C_b$ decreases monotonically. Two cases are of interest: (i) $r_b \rightarrow 0$ and (ii) $r_b \rightarrow \varepsilon > 0$. In case (i), $[\psi, \psi]$ diverges, and with it $(\psi, \psi)$. This is the case which Weyl called the limit point case, because the circle $C_b$ must converge to a point. In case (ii), $[\psi, \psi]$ is bounded by $1/\varepsilon$. By contrast, this is called the limit circle case.

(3) Not only is the radius of the circle $C_b$ a monotonic function of b, but the circles themselves each contain all their successors for increasing b, as can be shown by more carefully examining the inequality defining the interior of the circle.
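The shrinking described in points (2) and (3) can be watched in a concrete computation. The sketch below rests entirely on illustrative assumptions: the free operator $-u'' = \lambda u$ with $\lambda = i$, and canonical initial values $\psi(0)=0$, $\psi'(0)=1$, which make $[\psi,\psi](a)=0$, so that by Green's formula $[\psi,\psi](b) = (\lambda - \lambda^*)\int_0^b \vert\psi\vert^2\,dx$:

```python
# Weyl radius r_b = 1/|[psi,psi](b)| for -u'' = lam*u with Im(lam) > 0.
# With psi(0)=0, psi'(0)=1 we have [psi,psi](b) = (lam-lam*)*||psi||_b^2,
# so the radius must decrease as b grows.  All choices are illustrative.

def weyl_radius(lam, b, n=4000):
    h = b / n
    psi, dpsi = 0.0 + 0.0j, 1.0 + 0.0j
    norm2 = 0.0                      # running value of int_0^x |psi|^2

    def f(u, du):
        return du, -lam * u

    for _ in range(n):
        norm2 += abs(psi) ** 2 * h   # rectangle rule is enough here
        k1u, k1d = f(psi, dpsi)
        k2u, k2d = f(psi + h/2*k1u, dpsi + h/2*k1d)
        k3u, k3d = f(psi + h/2*k2u, dpsi + h/2*k2d)
        k4u, k4d = f(psi + h*k3u, dpsi + h*k3d)
        psi  += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        dpsi += h/6 * (k1d + 2*k2d + 2*k3d + k4d)
    return 1.0 / (abs(lam - lam.conjugate()) * norm2)

r1, r2 = weyl_radius(1j, 4.0), weyl_radius(1j, 8.0)
print(r2 < r1)   # True: the circle C_b shrinks with b (limit point here)
```

Since the integrand $\vert\psi\vert^2$ is positive, the radius can only decrease as the cutoff b moves outward, in agreement with point (2).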

The null surface therefore carries significant information about the square integrability properties of the various solutions of the differential equation. For a second-order differential operator this information can be inferred according to the two cases of limit point or limit circle:

Limit Point. From initial conditions specified by a single bracket, in the form $[f,\psi]=1$, there is always one square integrable solution as $b \rightarrow \infty$, namely

\begin{eqnarray*}
f & = & \varphi + m_\infty \psi
\end{eqnarray*}

Any other solution not proportional to this one diverges; in particular, $\psi$ itself is never square integrable over the infinite range.

Limit Circle. Even in the limit as $b \rightarrow \infty$ all solutions are square integrable.

Unfortunately we still lack the information necessary for quantum mechanical problems, because the eigenvalue is necessarily complex, forcing us to seek limiting values of m as $\lambda$ approaches the real axis. Furthermore, in addition to hunting limiting values for m, we still have to investigate the limiting behavior of eigenfunction expansions.

The way in which a connection may be established between eigenfunction expansions in a finite interval and in an infinite interval is to use the theory of Sturm-Liouville systems. For any finite interval it is possible to find a complete orthonormal basis of functions satisfying whatever consistent homogeneous boundary conditions that we wish to impose at the two endpoints. Our concern has to be with the process of taking a limit as one endpoint moves to infinity.
