
Symmetric bilinear functions

Another traditional restriction concerns the symmetry of $f$ with respect to exchanging its arguments. If the arguments matter but their order does not, it could be said that

\begin{eqnarray*}
f(x, y) & = & f(y,x).
\end{eqnarray*}



In such a case only one of the linearity requirements would need to be given explicitly. Of course, such a switch supposes that the same vector space is used for each argument.
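
For example, if linearity in the first argument has been stated, symmetry supplies linearity in the second:

\begin{eqnarray*}
f(x, a y + b z) & = & f(a y + b z, x) \\
 & = & a f(y, x) + b f(z, x) \\
 & = & a f(x, y) + b f(x, z).
\end{eqnarray*}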

Another possibility would be that changing the order would change the sign of the result. This assumption eventually leads to an axiomatic theory of determinants. But to stay with the symmetric alternative, the next step is to refer the function to a basis (for which a two-dimensional space is sufficiently illustrative):

\begin{eqnarray*}
f(a e_1 + b e_2, c e_1 + d e_2) & = &
a c f(e_1, e_1) + a d f(e_1, e_2)
+ b c f(e_2, e_1) + b d f(e_2, e_2).
\end{eqnarray*}



This is nicely written as a matrix equation,

\begin{displaymath}
\left[ \begin{array}{cc} a & b \end{array} \right]
\left[ \begin{array}{cc}
f(e_1, e_1) & f(e_1, e_2) \\
f(e_2, e_1) & f(e_2, e_2)
\end{array} \right]
\left[ \begin{array}{c}
c \\ d
\end{array} \right]
\end{displaymath}

showing how hard it is for linear algebra to escape from matrix notation. The central matrix is determined exclusively by the basis, and is symmetric because $f$ was. Conversely, choosing the values of $f$ just for the basis fixes its values everywhere else, and so uniquely defines the function. A plausible choice is the Kronecker delta, making $f$ (the square of) Euclidean distance. Any other choice would be a metric matrix for some geometry, but this one gives the Euclidean metric. Interestingly, an antisymmetric matrix (from an antisymmetric $f$) would imply a symplectic geometry with a symplectic metric.
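
Concretely, with the Kronecker delta choice the central matrix is the identity, while one common convention for the antisymmetric choice gives the two-dimensional symplectic form:

\begin{eqnarray*}
\left[ \begin{array}{cc} a & b \end{array} \right]
\left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right]
\left[ \begin{array}{c} c \\ d \end{array} \right] & = & a c + b d, \\
\left[ \begin{array}{cc} a & b \end{array} \right]
\left[ \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right]
\left[ \begin{array}{c} c \\ d \end{array} \right] & = & a d - b c.
\end{eqnarray*}

The first line is the usual Euclidean dot product; the second, an oriented area, vanishes exactly when the two vectors are proportional.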

What would be reasonable requirements for $f$, yet not depending on a basis? Requiring $f(x, x)$ to be always positive, and zero only when $x$ is itself the zero vector, seems adequate. As a consequence,

\begin{eqnarray*}
f(x - y, x - y) & = & f(x, x) + f(y, y) - 2 f (x, y)
\end{eqnarray*}



would have to be nonnegative. Dividing by $\surd( f(x, x) f(y, y) )$, which would not be zero if neither $x$ nor $y$ were, we need conditions for an expression of the form $r + 1/r - s$ to be nonnegative, where $r = \surd( f(x, x) / f(y, y) )$ and $s = 2 f(x, y) / \surd( f(x, x) f(y, y) )$.

Note that $r + 1/r$ has a minimum value of $2$ at $r = 1$, for positive $r$, and a maximum value of $-2$ at $r = -1$, for negative $r$. Thus a positive $s$ could never exceed $2$, nor a negative $s$ ever fall below $-2$, which incidentally translates into a form of diagonal dominance for the metric matrix (if there were one).
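
Unwinding the definition of $s$, the bound $\vert s \vert \leq 2$ is just the Cauchy-Schwarz inequality,

\begin{eqnarray*}
\vert f(x, y) \vert & \leq & \surd( f(x, x) ) \surd( f(y, y) ),
\end{eqnarray*}

which also guarantees that the cosine about to be introduced lies between $-1$ and $1$.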

It might also inspire the trigonometrically minded to make up an angle by writing

\begin{eqnarray*}
f(x, y) & = & \surd( f(x, x) ) \surd( f(y, y) ) \cos (\theta).
\end{eqnarray*}
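
As a quick illustration in the Euclidean case, taking $x = e_1$ and $y = e_1 + e_2$ gives

\begin{eqnarray*}
\cos (\theta) & = & \frac{f(x, y)}{\surd( f(x, x) ) \surd( f(y, y) )} \; = \; \frac{1}{\surd 2},
\end{eqnarray*}

so $\theta$ is the expected $45^{\circ}$ between the first basis vector and the diagonal.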



To pursue the idea of distance further, note that three of the four quantities in the expression for $f(x - y, x - y)$ are already positive; only the cross term $-2 f(x, y)$ might not be. Replacing each term by its absolute value can only increase the right hand side, so an inequality results:

\begin{eqnarray*}
\vert f(x - y, x - y)\vert & \leq & \vert f(x, x)\vert + \vert f(y, y)\vert + 2 \vert f (x, y)\vert .
\end{eqnarray*}



This inequality would take a more familiar form if the distance $d(x, y)$ were defined as the positive root of $d^2(x, y) = f(x-y, x-y)$, $x - y$ were substituted for $x$ in the inequality, and likewise $y - z$ for $y$. The result,

\begin{eqnarray*}
d(x, z) & \leq & d(x, y) + d(y, z)
\end{eqnarray*}



is the triangle inequality required to complete the axioms for a distance, which altogether read: $d(x, y) \geq 0$, with equality only when $x = y$; $d(x, y) = d(y, x)$; and $d(x, z) \leq d(x, y) + d(y, z)$. This all sounds like deriving geometry from vector algebra, rather than the other way around.
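
Spelled out one way, the step from the earlier inequality to this one uses the Cauchy-Schwarz bound on the cross term:

\begin{eqnarray*}
d^2(x, z) & = & f((x - y) + (y - z), (x - y) + (y - z)) \\
 & = & d^2(x, y) + d^2(y, z) + 2 f(x - y, y - z) \\
 & \leq & d^2(x, y) + d^2(y, z) + 2 d(x, y) d(y, z) \\
 & = & ( d(x, y) + d(y, z) )^2,
\end{eqnarray*}

after which taking positive square roots gives the triangle inequality.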

Why do we go to so much trouble to make up this bilinear functional, especially since we already have the dual space and linear functionals to work with? For one thing, there are the connections with geometry: distances, projections, and the cosines of angles. For another, it is less dependent on a basis, which is crucial for vector spaces that may not have bases, such as when their dimension is no longer finite; such spaces abound in quantum mechanics and its applications.

Figure: The Reciprocal Basis as a Dual Basis
\begin{figure}\begin{picture}(290,210)(-60,0)
\epsffile{recip.eps}\end{picture}\end{figure}

Here is an illustration of a basis i, j and its reciprocal basis ii, jj, which would have been a dual basis except that inner products work on two copies of the same vector space, rather than on the (space, dual) pair.

Note the difference between contravariant components, which are the coefficients used in linear combinations (parallel projections on the dual basis), and covariant components, which result from perpendicular projection on the basis itself.
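
As a small worked example (using the Euclidean inner product, a basis chosen just for illustration, and writing $e^1, e^2$ for the reciprocal basis), take $e_1 = (1, 0)$ and $e_2 = (1, 1)$; the reciprocal basis satisfying $f(e^i, e_j) = \delta^i_j$ is then $e^1 = (1, -1)$, $e^2 = (0, 1)$. For the vector $v = (3, 1)$,

\begin{eqnarray*}
v & = & 2 e_1 + 1 e_2 \qquad \mbox{(contravariant components $2, 1$, which equal $f(v, e^1), f(v, e^2)$),} \\
f(v, e_1) & = & 3, \quad f(v, e_2) \; = \; 4 \qquad \mbox{(covariant components).}
\end{eqnarray*}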


Pedro Hernandez 2004-02-28