There are two main themes in this collection of short handouts.
First, that rather than think of an m by n
matrix as a doubly-indexed array, it is often more enlightening
to think of it as an n-tuple of columns (which are, of course, m-vectors)
or an m-tuple of rows.

Second, that once one understands the method of solving systems
of equations by elimination, one essentially knows the whole of
the linear algebra covered in this (very stripped-down) course.
Everything else, except for eigenvectors and eigenvalues,
is just a matter of constantly restating the same facts in
different language.

A lot of the files listed below are in
PDF (Adobe Acrobat) format.
Alternate versions are in
DVI format (produced by TeX;
see here for a DVI viewer
provided by
John P. Costella)
and in PostScript format (viewable with
Ghostscript).
Some systems may have trouble with certain of the documents
in DVI format, because they use a few German letters
from a font that may not be available on all systems.
(Three alternate sites for DVI viewers, via FTP,
are
CTAN,
Duke,
and
Dante, in Germany.)


Instead of thinking of a system of equations as constituting m equations in n unknowns, where all the coefficients are scalars, it can be more enlightening to think of it as a single equation in n unknowns where the coefficients (and constant term) are m-dimensional vectors.
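This point of view can be checked numerically. The sketch below (a made-up 3 by 2 example, using NumPy, which is not part of the original handouts) computes the same vector b once as an ordinary matrix-vector product and once as a linear combination of the columns of A, with the unknowns as the coefficients:

```python
import numpy as np

# Hypothetical 3x2 system Ax = b: the two columns of A are 3-vectors,
# and solving the system means writing b as x1*(column 1) + x2*(column 2).
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])
x = np.array([3.0, -1.0])

# Doubly-indexed view: the usual matrix-vector product.
b_matrix_view = A @ x

# Column view: the same b as a linear combination of the columns of A.
b_column_view = x[0] * A[:, 0] + x[1] * A[:, 1]

assert np.allclose(b_matrix_view, b_column_view)
```

Both views produce the same vector; only the bookkeeping differs.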


Doing an elementary row operation on the left-hand factor A
of a matrix product AB gives the same result
as doing the same operation on
the product matrix. Using this observation, it is easy to explain why
the usual process for inverting a matrix works, and why the left inverse
and the right inverse are identical.

This approach enables one to omit the topic of elementary matrices from
the course.
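The key observation can be verified directly. The following sketch (my own illustrative example, not from the handout) applies one elementary row operation, "add 2 times row 0 to row 2," first to the left-hand factor A and then to the product AB, and checks that the results agree:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))

def add_row(M, src, dst, c):
    """Elementary row operation: add c * (row src) to row dst."""
    M = M.copy()
    M[dst] += c * M[src]
    return M

# Operate on A first, then multiply ...
left = add_row(A, 0, 2, 2.0) @ B
# ... versus multiply first, then operate on the product AB.
right = add_row(A @ B, 0, 2, 2.0)

assert np.allclose(left, right)
```

This is exactly the fact that makes the usual matrix-inversion procedure work: the row operations that carry A to I simultaneously carry I to the inverse.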




Some incorrect statements frequently found in student proofs.



Students seem to have enormous difficulty in learning the pattern for
proving any statement that essentially reduces to an implication, such as
proving that vectors are linearly independent or that a function is
one-to-one. (This may also be a main source of the difficulty students
have with proofs by induction.)

When asked to prove "If P, then Q," students will almost invariably begin
by saying, "Suppose Q."

The logical analysis here was one of my attempts to clarify this type of
proof for students. I don't know whether it actually helps or not.


By definition, the column space of an m by n
matrix A with entries in a field F
is the subspace of F^{m} spanned by the columns
of A. A close examination of the method of elimination
shows that a basis for this space can be obtained by choosing
those columns of A which will contain the leading
entries of rows after A is reduced to row-echelon form.
(The row echelon form of A shows which columns to choose,
but the basis columns themselves must come from the original matrix
A.)
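A small computation makes the parenthetical warning concrete. In the sketch below (an example I constructed, using SymPy's `rref`), columns 1 and 2 of A are multiples of column 0, so the pivots land in columns 0 and 3; the basis is then read off from those columns of the original A, not of its echelon form:

```python
from sympy import Matrix

# Columns 1 and 2 are multiples of column 0, so after reduction
# the leading entries fall in columns 0 and 3.
A = Matrix([[1, 2, 3, 1],
            [2, 4, 6, 3],
            [1, 2, 3, 2]])

R, pivots = A.rref()                # echelon form tells us WHICH columns...
basis = [A[:, j] for j in pivots]   # ...but the basis vectors come from A itself

assert pivots == (0, 3)
assert Matrix.hstack(*basis).rank() == A.rank() == 2
```

Note that the corresponding columns of R span a different subspace in general (here they lie in the plane spanned by the first two standard basis vectors), which is why the basis must be taken from A.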


Suppose that an n by n matrix A has n linearly independent
eigenvectors and let P be the matrix whose columns are these
eigenvectors. Then the jth column of the product
AP is readily seen to be equal to the jth column of
P multiplied by the jth eigenvalue. If now
we write Q for the inverse of P,
it follows easily that QAP is a diagonal matrix with the
eigenvalues on the diagonal.

This approach is not dependent on change-of-basis formulas.
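The argument can be traced numerically. In this sketch (a hand-picked symmetric 2 by 2 example of mine, not from the handout), the columns of P are eigenvectors of A; the column-by-column identity AP[:, j] = (eigenvalue j) P[:, j] is checked first, and then that QAP is the diagonal matrix of eigenvalues:

```python
import numpy as np

# A symmetric 2x2 matrix with eigenvalues 3 and 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors of A
Q = np.linalg.inv(P)

# The j-th column of AP is the j-th column of P times the j-th eigenvalue.
for j in range(2):
    assert np.allclose(A @ P[:, j], eigvals[j] * P[:, j])

# Hence QAP is diagonal, with the eigenvalues on the diagonal.
D = Q @ A @ P
assert np.allclose(D, np.diag(eigvals))
```

No change-of-basis formula is invoked anywhere; everything follows from reading the product AP column by column.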