Notes on Linear Algebra

Lee Lady


There are two main themes in this collection of short hand-outs. First, that rather than thinking of an m by n matrix as a doubly-indexed array, it is often more enlightening to think of it as an n-tuple of columns (which are, of course, m-vectors) or as an m-tuple of rows.
And secondly, that once one understands the method of solving systems of equations by elimination, essentially one knows the whole of the linear algebra covered in this (very stripped-down) course. Everything else, except for eigenvectors and eigenvalues, is just a matter of constantly restating the same facts in different language.
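As a concrete illustration of the column point of view (a small sketch added here in NumPy, which the hand-outs themselves do not use), note that the product Ax is nothing but a linear combination of the columns of A:

    import numpy as np

    # A 3 by 2 matrix, viewed as a 2-tuple of columns (each column a 3-vector).
    A = np.array([[1.0, 4.0],
                  [2.0, 5.0],
                  [3.0, 6.0]])

    col0, col1 = A[:, 0], A[:, 1]    # the columns
    row0 = A[0, :]                   # one of the rows

    # The product Ax is the linear combination x[0]*col0 + x[1]*col1.
    x = np.array([10.0, 20.0])
    assert np.allclose(A @ x, x[0] * col0 + x[1] * col1)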

A lot of the files listed below are in PDF (Adobe Acrobat) format. Alternate versions are in DVI format (produced by TeX; see here for a DVI viewer provided by John P. Costella) and PostScript format (viewable with Ghostscript). Some systems may have trouble with some of the documents in DVI format, because they use a few German letters from a font that may not be available on every system. (Three alternate sites for DVI viewers, via FTP, are CTAN, Duke, and Dante, in Germany.)

 

Systems of Linear Equations in a Nutshell

(Click here for dvi format.)
(Click here for Postscript format.)

Instead of thinking of a system of equations as constituting m equations in n unknowns, where all the coefficients are scalars, it can be more enlightening to think of it as a single equation in n unknowns where the coefficients (and the constant term) are m-dimensional vectors.
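For example (an illustration added here, not taken from the hand-out), the system

    2x + 3y = 8
     x -  y = 1

can be rewritten as the single vector equation

    x (2, 1) + y (3, -1) = (8, 1),

in which the coefficients (2, 1) and (3, -1) are precisely the columns of the coefficient matrix.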


How to Find the Inverse of a Matrix

(Click here for dvi format.)
(Click here for Postscript format.)

Doing an elementary row operation on the left-hand factor A of a matrix product AB gives the same result as doing the same operation on the product matrix. Using this observation, it is easy to explain why the usual process for inverting a matrix works, and why the left inverse and the right inverse are identical.
This approach enables one to omit the topic of elementary matrices from the course.
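To make the process concrete, here is a minimal sketch in NumPy (my own illustration of the usual process, not code from the hand-out): row-reduce the augmented matrix [A | I] until the left half becomes I, at which point the right half is the inverse.

    import numpy as np

    def invert(A):
        """Invert a square matrix by row-reducing the augmented matrix [A | I]."""
        n = A.shape[0]
        M = np.hstack([A.astype(float), np.eye(n)])  # the augmented matrix [A | I]
        for j in range(n):
            # Find a row with a nonzero entry in column j and swap it up.
            p = j + np.argmax(np.abs(M[j:, j]))
            if np.isclose(M[p, j], 0.0):
                raise ValueError("matrix is not invertible")
            M[[j, p]] = M[[p, j]]
            M[j] /= M[j, j]                  # scale the pivot row so the pivot is 1
            for i in range(n):
                if i != j:
                    M[i] -= M[i, j] * M[j]   # clear the rest of column j
        return M[:, n:]                      # the right half is now the inverse

    A = np.array([[2.0, 1.0], [5.0, 3.0]])
    assert np.allclose(invert(A) @ A, np.eye(2))   # the left inverse...
    assert np.allclose(A @ invert(A), np.eye(2))   # ...equals the right inverse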


Some Equivalent Statements

(Click here for dvi format.)
(Click here for Postscript format.)

Some Equivalent Characterizations of Basic Concepts

(Click here for dvi format.)
(Click here for Postscript format.)


A "Grammar Lesson" in Linear Algebra

(Click here for dvi format.)
(Click here for Postscript format.)

Some incorrect statements frequently found in student proofs.


The Pivotal Role of Zero in Linear Algebra

(Click here for dvi format.)
(Click here for Postscript format.)

Proving "If .... then" Statements


The Logical Structure of Proving Linear Independence

(Click here for dvi format.)
(Click here for Postscript format.)

Students seem to have enormous difficulty in learning the pattern for proving any statement that essentially reduces to an implication, such as proving that vectors are linearly independent or that a function is one-to-one. (This may also be a main source of the difficulty students have with proofs by induction.)
When asked to prove "If P, then Q," students will almost invariably begin by saying, "Suppose Q."
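(To prove that vectors v1, ..., vn are linearly independent, for instance, the proof must begin "Suppose c1 v1 + ... + cn vn = 0" and end by concluding that c1 = ... = cn = 0; starting from the conclusion proves nothing.)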
The logical analysis here was one of my attempts to clarify this type of proof for students. I don't know whether it actually helps or not.


The Column Space of a Matrix

(Click here for dvi format.)
(Click here for Postscript format.)

By definition, the column space of an m by n matrix A with entries in a field F is the subspace of F^m spanned by the columns of A. A close examination of the method of elimination shows that a basis for this space can be obtained by choosing those columns of A which will contain the leading entries of rows after A is reduced to row-echelon form. (The row-echelon form of A shows which columns to choose, but the basis columns themselves must come from the original matrix A.)
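A short sketch of this recipe in NumPy (my own illustration; the function name pivot_columns and the tolerance are invented for the example, since NumPy has no built-in echelon form):

    import numpy as np

    def pivot_columns(A, tol=1e-12):
        """Indices of the columns carrying leading entries in row-echelon form."""
        M = A.astype(float).copy()
        pivots, row = [], 0
        for col in range(M.shape[1]):
            if row == M.shape[0]:
                break
            p = row + np.argmax(np.abs(M[row:, col]))
            if abs(M[p, col]) < tol:
                continue                    # no leading entry in this column
            M[[row, p]] = M[[p, row]]       # swap the pivot row into place
            M[row+1:] -= np.outer(M[row+1:, col] / M[row, col], M[row])
            pivots.append(col)
            row += 1
        return pivots

    # The third column is the sum of the first two, so only two pivot columns.
    A = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 1.0, 2.0]])
    cols = pivot_columns(A)                 # -> [0, 1]
    basis = A[:, cols]                      # basis columns come from the ORIGINAL A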


Eigenvalues

(Click here for dvi format.)
(Click here for Postscript format.)

Suppose that an n by n matrix A has n linearly independent eigenvectors, and let P be the matrix whose columns are these eigenvectors. Then the jth column of the product AP is readily seen to be equal to the jth column of P multiplied by the jth eigenvalue. If we now write Q for the inverse of P, it follows easily that QAP is a diagonal matrix with the eigenvalues on the diagonal.
This approach is not dependent on change-of-basis formulas.
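The same computation can be checked numerically with NumPy's eigensolver (a sketch added here for illustration, not part of the hand-out):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    # Columns of P are eigenvectors; entries of w are the matching eigenvalues.
    w, P = np.linalg.eig(A)

    # Column j of AP equals column j of P times the jth eigenvalue...
    assert np.allclose(A @ P, P * w)

    # ...so QAP is diagonal, with the eigenvalues on the diagonal (Q = P inverse).
    Q = np.linalg.inv(P)
    assert np.allclose(Q @ A @ P, np.diag(w))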


Syllabus for Spring, 1996


