
Second order differential equations

Second order linear differential equations for functions of a real variable are of common occurrence in physical and engineering applications. Some equations are of second order as a consequence of Newton's laws, wherein accelerations rather than velocities play the predominant role. Others result from separating variables in Laplace's equation or Poisson's equation, where second derivatives are a consequence of curvature in a variational principle.

Complex numbers inevitably figure in the solution of such equations because they unify the occurrence of sines and cosines, a situation which can be traced in turn to working with eigenvalues and eigenvectors of antisymmetric matrices. Such matrices require complex numbers for the same reason that complex numbers were introduced into the solution of algebraic equations in the first place.
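For example, with a constant frequency $\omega$, the simplest antisymmetric coefficient matrix already shows the connection:

\begin{displaymath}
\frac{d}{dt} \left[ \begin{array}{c} y \\ x \end{array} \right] =
\left[ \begin{array}{cc} 0 & -\omega \\ \omega & 0 \end{array} \right]
\left[ \begin{array}{c} y \\ x \end{array} \right],
\end{displaymath}

whose eigenvalues are $\pm i\omega$. Its real solutions are combinations of $\cos\omega t$ and $\sin\omega t$, while its eigenvector solutions are proportional to $e^{\pm i\omega t}$.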

Since a single second order equation invites its replacement by two first order equations, the solutions of those equations are conveniently represented by points in the real plane. There is no particular reason to regard those points as complex numbers, although that is sometimes done; in any event it is not a question of working with the real and imaginary parts of a single analytic function.
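To make the replacement explicit (a routine reduction; the names $w$, $p$, $q$ are chosen here for illustration), a generic second order equation $w'' + p(s)\,w' + q(s)\,w = 0$ becomes a first order pair once the derivative is taken as a second dependent variable:

\begin{displaymath}
x = w, \qquad y = \frac{dw}{ds}, \qquad
\frac{dy}{ds} = -p(s)\,y - q(s)\,x, \qquad
\frac{dx}{ds} = y,
\end{displaymath}

which is a special case of the general pair written out below, with $a(s) = -p(s)$, $b(s) = -q(s)$, $c(s) = 1$, $d(s) = 0$.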

In contrast, it is often useful to work with a second order differential equation in a single complex variable, even though its coefficients make it look like the same real equation which would have been supplanted by two equations in real variables. This time the supplementary variable is complex, requiring four dimensions to give everything the same treatment as before. Obviously whatever analytic gains are achieved in the process, they are offset by much more restricted graphical and visual opportunities.

Much insight into complex equations can be gained from the graphical intuition obtained from real equations. Conversely, puzzling aspects of the solutions of real differential equations are often clarified by their behavior in the complex plane, especially via the influence of complex singularities on real behavior. That influence may take the form of an unsuspected singularity, or the existence of an unanticipated branch point.
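A standard illustration of that influence, independent of any particular differential equation: the function $1/(1+x^2)$ is perfectly smooth for every real $x$, yet its power series about the origin,

\begin{displaymath}
\frac{1}{1+x^2} = 1 - x^2 + x^4 - x^6 + \cdots,
\end{displaymath}

converges only for $|x| < 1$, because the poles at $x = \pm i$ limit the radius of convergence even though nothing untoward happens anywhere on the real axis.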

Whatever their origin, consider a pair of linear first order differential equations whose constituents could be either real or complex:

\begin{displaymath}
\frac{dy(s)}{ds} = a(s)\,y(s) + b(s)\,x(s)
\end{displaymath} (281)
\begin{displaymath}
\frac{dx(s)}{ds} = c(s)\,y(s) + d(s)\,x(s),
\end{displaymath} (282)

which could be written in matrix form as

\begin{displaymath}
\frac{d}{ds} \left[ \begin{array}{c} y(s) \\ x(s) \end{array} \right] =
\left[ \begin{array}{cc} a(s) & b(s) \\ c(s) & d(s) \end{array} \right]
\left[ \begin{array}{c} y(s) \\ x(s) \end{array} \right].
\end{displaymath}

To foresee some applications, the coefficient matrix for the Schrödinger equation takes the form

\begin{displaymath}
\left[ \begin{array}{cc} 0 & V(x)-E \\ 1 & 0 \end{array} \right]
\end{displaymath} (283)

for energy E, potential V(x), and position x as the independent variable. The dependent variables are $\Psi(x)$, the wave function, and $d\Psi(x)/dx$. On the other hand, either a one-dimensional Dirac equation or the radial part of a three-dimensional Dirac equation employs a rather more symmetric version of the same matrix:

\begin{displaymath}
\left[ \begin{array}{cc} 0 & m - E + V(x) \\ m + E - V(x) & 0 \end{array} \right]
\end{displaymath} (284)

The dependent variables are the positive and negative energy components, the electron and positron wave functions. The matrix acquires diagonal components when spin has to be included as well.
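To check the correspondence with the more familiar forms (a routine expansion; the unit convention $\hbar^2/2m = 1$ and the component labels $\psi_1$, $\psi_2$ are choices made here, not taken from the text), writing out the pair of first order equations for the matrix (283) recovers the time independent Schrödinger equation,

\begin{displaymath}
\frac{d}{dx} \left[ \begin{array}{c} \Psi'(x) \\ \Psi(x) \end{array} \right] =
\left[ \begin{array}{cc} 0 & V(x)-E \\ 1 & 0 \end{array} \right]
\left[ \begin{array}{c} \Psi'(x) \\ \Psi(x) \end{array} \right]
\quad\Longrightarrow\quad
\frac{d^2\Psi}{dx^2} = \left(V(x)-E\right)\Psi,
\end{displaymath}

while the matrix (284) couples its two components through

\begin{displaymath}
\frac{d\psi_1}{dx} = \left(m - E + V(x)\right)\psi_2, \qquad
\frac{d\psi_2}{dx} = \left(m + E - V(x)\right)\psi_1.
\end{displaymath}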

The plane whose Cartesian coordinates are x and y is called the phase plane, the name it carries in classical mechanics, where x is a position and y the corresponding momentum.
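As a sketch of how such a phase plane picture might be produced numerically (the harmonic potential, the trial energy, the interval, and the initial data below are illustrative assumptions, not taken from the text), the following Python fragment integrates the first order system with the Schrödinger coefficient matrix (283) and traces the trajectory of $(\Psi, d\Psi/dx)$:

\begin{verbatim}
# Sketch: integrate the first order system with coefficient matrix (283)
# and trace its phase-plane trajectory.  The potential V, the energy E,
# the interval, and the initial data are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

E = 1.0                        # trial energy (assumed)
V = lambda x: 0.5 * x**2       # harmonic oscillator potential (assumed)

def rhs(x, u):
    # u = (y, psi) with y = dpsi/dx; matrix (283) gives
    # dy/dx = (V - E) psi,  dpsi/dx = y.
    y, psi = u
    return [(V(x) - E) * psi, y]

sol = solve_ivp(rhs, (0.0, 6.0), [0.0, 1.0], dense_output=True, rtol=1e-8)

xs = np.linspace(0.0, 6.0, 400)
y, psi = sol.sol(xs)

plt.plot(psi, y)                 # trajectory in the phase plane
plt.xlabel('x = psi(x)')         # "position" coordinate
plt.ylabel('y = dpsi/dx')        # "momentum" coordinate
plt.show()
\end{verbatim}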



 