\documentclass[10pt]{amsart}
\input lactose
\input lac
\usepackage{boxedminipage}
\renewcommand{\baselinestretch}{1.25}
\begin{document}
\thispagestyle{empty}
\begin{raggedright}
\setlength{\parskip}{4pt}
\setlength{\parindent}{1.5em}
\setlength{\unitlength}{8mm}
\setlength{\fboxsep}{.25in}
\renewcommand{\bottomfraction}{.6}
\renewcommand{\floatpagefraction}{.6}
\renewcommand{\textfraction}{.2} % This is the default.
\setlength{\floatsep}{12pt plus 2pt minus 2pt} % Default
\setlength{\textfloatsep}{.3in plus .05in minus .1in}
\setlength{\intextsep}{.2in plus .1in minus .1in}
% \renewcommand{\floatfraction}{.85}
\setlength{\floatsep}{.3in plus .2in}
% \setlength{\frameboxrule}{1pt}
\begin{center}{\bfseries Convergence of Infinite Series in General \\
and Taylor Series in Particular}
\par\bigskip E.\,L. Lady
\par\medskip(\today)
\end{center}
\vspace{.3in}
\setlength{\parskip}{8pt}
\setlength{\parindent}{1.5em}
\section*{\textbf{Some Series Converge: The Ruler Series}}
At first, it doesn't seem that it would ever make any sense
to add up an infinite number of things.
It seems that any time one tried to do this,
the answer would always be infinitely large.
The easiest example that shows that this need not be true
is the series I like to call the ``Ruler Series:''
\[ 1 + \dfrac 12 + \dfrac 14 + \dfrac 18 + \dfrac 1{16} + \dfrac1{32}
+ \cdots \]
It should be clear what the dots ($\cdots$) at the end
indicate, but to be precise, the $k\th$ term being added here
(if one counts~$1$ as the $0\th$ term
and~$1/2$ as the $1^{\text{st}}$, etc.) is $1/2^k$.
For instance, the 10th term is $1/2^{10}=1/1024$.
If one looks at the sums as one takes more and more terms
of this series,
the sum is~1 if one takes only the first term,
$1\tfrac12$ if one takes the first two,
$1\tfrac34$ if one adds the first three terms of the series.
As one adds more and more terms, one gets a sequence of sums
\[ 1 \qquad 1\tfrac12 \qquad 1\tfrac34 \qquad 1\tfrac 78
\qquad 1\tfrac{15}{16} \qquad 1\tfrac{31}{32} \qquad
1\tfrac{63}{64}\quad\cdots \]
These numbers are the ones found on a ruler
as one goes up the scale from~1 towards~2,
each time moving towards the next-smaller notch on the ruler.
\setlength{\unitlength}{.08in}
\[ \begin{picture}(64,16)(0,-4)
\put(0,0){\line(1,0){64}}
\put(-.4,9){0}
\put(31.6,9){1}
\put(63.5,9){2}
\put(0,0){\line(0,1){8}}
\put(32,0){\line(0,1){8}}
\put(64,0){\line(0,1){8}}
\put(16,0){\line(0,1){6}}
\put(48,0){\line(0,1){6}}
\put(47,8){$1\tfrac12$}
\put(8,0){\line(0,1){4}}
\put(24,0){\line(0,1){4}}
\put(40,0){\line(0,1){4}}
\put(36,0){\line(0,1){3}}
\put(44,0){\line(0,1){3}}
\put(52,0){\line(0,1){3}}
\put(56,0){\line(0,1){4}}
\put(55,6){$1\tfrac34$}
\put(60,0){\line(0,1){3}}
\put(59,6){\small$1\tfrac78$}
\put(50,0){\line(0,1){2}}
\put(54,0){\line(0,1){2}}
\put(58,0){\line(0,1){2}}
\put(62,0){\line(0,1){2}}
\put(57,0){\line(0,1){1}}
\put(59,0){\line(0,1){1}}
\put(61,0){\line(0,1){1}}
\put(63,0){\line(0,1){1}}
\put(61,3.5){\Small$1\tfrac{15}{16}$}
\end{picture} \]
Once one sees the pattern, two things are clear:
\par(1) Even if one adds an incredibly large number of terms in this series, the sum never gets larger than~2.
\par(2) By adding enough terms,
the sum can be made arbitrarily close to~2.
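Both observations are easy to confirm with exact rational arithmetic. The following Python sketch (the function name is ours, introduced only for illustration) computes the partial sums:

```python
from fractions import Fraction

def ruler_partial_sum(n):
    """Sum of the first n terms of the Ruler Series: 1 + 1/2 + ... + 1/2^(n-1)."""
    return sum(Fraction(1, 2**k) for k in range(n))

# (1) No partial sum ever reaches 2; (2) the sums approach 2,
# since the n-term sum equals exactly 2 - 1/2^(n-1).
for n in range(1, 12):
    s = ruler_partial_sum(n)
    assert s < 2
    assert s == 2 - Fraction(1, 2**(n - 1))
```

Since the shortfall $1/2^{n-1}$ halves with each new term, the sums can indeed be brought as close to~2 as desired.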
We say that the series
\[ 1 + \dfrac 12 + \dfrac 14 + \dfrac 18 + \dfrac 1{16} + \dfrac1{32}
+ \cdots \]
\emph{converges}~to~2.
Symbolically, we indicate this by writing
\[ 1 + \dfrac 12 + \dfrac 14 + \dfrac 18 + \dfrac 1{16} + \dfrac1{32}
+ \cdots =2\,. \]
This notation doesn't make any sense if interpreted literally,
but it is common for students
(and even many teachers) to interpret this as meaning
``\emph{If one could} add all the infinitely many terms,
then the final sum would be~2.''
This, unfortunately, is not much different from saying,
``\emph{If} horses could fly then riders could chase clouds.''
The fact is that horses cannot fly and one cannot add together
an infinite number of things.
Instead, one is taking the \emph{limit} as one adds more and more and more
of the terms in the series.
The fact that one is taking a limit rather than adding an infinite
number of things may seem like a fine point that only mathematicians
would be concerned with.
However certain things happen with infinite series
that will seem bizarre unless you remember
that one is not actually adding together all the terms.
\vspace{.3in}\section*{\textbf{Some Series Diverge: The Harmonic Series}}
The nature of the human mind
seems to be that we assume that the particular
represents the universal.
In other words, in this particular instance,
from the fact that the series
\[ 1 + \dfrac 12 + \dfrac 14 + \dfrac 18 + \dfrac 1{16} + \dfrac1{32}
+ \cdots \]
converges, one is likely to erroneously infer that \emph{all}
infinite series converge.
This is clearly not the case.
For instance,
\[ 1 + 2 + 3 + 4 + \cdots \]
is an infinite series that clearly cannot converge.
For that matter, the series
\[ 1 + 1 + 1 + 1 + 1 + \cdots \]
also does not converge.
These examples illustrate a rather obvious rule:
{\itshape An infinite series cannot converge
unless the terms eventually get arbitrarily small.}
In more formal language:
\begin{center}
\fbox{An infinite series
$a_0+a_1+a_2+a_3+\cdots$
cannot converge unless
$\lim_{k\to\infty}a_k=0$.}
\end{center}
\vspace{.3in}
The natural mistake to make now is to assume that
any infinite series where $\lim_{k\to\infty}a_k=0$
will converge.
Remarkably enough, this is not true.
For instance, consider the following series:
\[ 1 + \dfrac12 \ + \dfrac14 + \dfrac 14
\ + \dfrac 18 + \dfrac18 + \dfrac18 + \dfrac 18
\ + \dfrac1{16}+\cdots+\dfrac1{16}\ +\dfrac1{32}+\cdots\,. \]
The idea is that there will be~8 terms equal to~$\tfrac1{16}$,
16~terms equal to~$\tfrac1{32}$,
32~terms equal to~$\tfrac1{64}$, etc.
The $k^{\text{th}}$ term here (if we count~1 as the first)
is $1/{\gamma(k)}$,
where we define $\gamma(k)$ to be the smallest power of~2
which is greater than or equal to~$k$:
\begin{align*}&\gamma(2)=2\\
&\gamma(3)=\gamma(4)=4 \\
&\gamma(5)=\cdots=\gamma(8)=8 \\
&\gamma(9)=\cdots=\gamma(16)=16 \\
&\gamma(17)=\cdots=\gamma(32)=32 \\
&\text{etc.}
\end{align*}
(For a more formulaic definition,
we can define $\gamma(k)=2^{\ell(k)}$
with $\ell(k)=\lceil\log_2k\rceil$,
where $\lceil x\rceil$ is the \emph{ceiling} of~$x$:
the smallest integer greater than or equal to~$x$.
For instance, since $2^4 < 23 < 2^5$,
it follows that $\log_2 23 = 4.52\dots$ and so
$\ell(23)=\lceil 4.52\dots\rceil = 5$
and so $\gamma(23)=2^5=32$.)
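As a quick sanity check on this definition, here is a short Python sketch (ours, purely illustrative) computing $\gamma$ exactly as described:

```python
import math

def gamma(k):
    """Smallest power of 2 greater than or equal to k (for integer k >= 1)."""
    return 2 ** math.ceil(math.log2(k))

assert gamma(23) == 32                                  # since 2^4 < 23 < 2^5
assert [gamma(k) for k in range(2, 9)] == [2, 4, 4, 8, 8, 8, 8]
```

(For very large~$k$ one would avoid floating point, e.g.\ by using bit lengths, but for values of this size the formula above suffices.)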
Clearly in this series,
$\lim_{k\to\infty}a_k=0$.
On the other hand, we can see that the second term of the series is~$\tfrac12$, and the sum of the third and fourth terms
is also~$\tfrac12$, and so is the sum of the fifth through eighth terms.
The ninth through sixteenth terms also add up to~$\tfrac12$,
as do the seventeenth through the thirty-second:
\[ \dfrac1{32}+ \dfrac1{32}+ \dfrac1{32}+ \dfrac1{32}+ \dfrac1{32}+
\dfrac1{32}+ \dfrac1{32}+ \dfrac1{32}+ \dfrac1{32}+ \dfrac1{32}+
\dfrac1{32}+ \dfrac1{32}+ \dfrac1{32}+ \dfrac1{32}+ \dfrac1{32}+
\dfrac1{32}=\dfrac12\,. \]
We can see the pattern easily
by inserting parentheses into the series:
\[ 1 + \dfrac12 \ +\left(\dfrac14 + \dfrac 14\right)
\ +\left(\dfrac 18 + \dfrac18 + \dfrac18 + \dfrac 18\right)
\ +\left(\dfrac1{16}+\cdots+\dfrac1{16}\right)\
+\left(\dfrac1{32}+\cdots+\dfrac1{32}\right)+\cdots\,. \]
The terms within each set of parentheses add up to $\tfrac12$.
Thus as one goes further down the series,
one keeps adding a new summand of~$\tfrac12$ over and over again.
\[ 1 + \dfrac12 + \dfrac12 + \dfrac12 + \dfrac12 +\cdots\,. \]
Thus, by including enough terms,
one can make the partial sum of this series
as large as one wishes.
Hence the series
\[ 1 + \dfrac1{\gamma(2)} +\dfrac1{\gamma(3)}+\dfrac1{\gamma(4)}
+\dfrac1{\gamma(5)}+\dfrac1{\gamma(6)}+\cdots \]
\textbf{does not converge}.
This example may not seem very profound,
but by using it, it is easy to see that
the \textbf{Harmonic Series}
\[ 1 + \dfrac 12 +\dfrac 13 +\dfrac 14 + \dfrac 15 +\dfrac 16
+ \dfrac17 + \dfrac18 +\dfrac19 + \cdots \]
also
\textbf{does not converge.}
Despite the fact that the terms one is adding
keep getting smaller and smaller,
to the extent that eventually they fall below the level
where a calculator can keep track of them,
nonetheless if one takes a sufficient number of terms
and keeps track of all the decimal places,
the sum can be made arbitrarily huge.
In fact, the $k^{\text{th}}$ term of the Harmonic Series
is $\dfrac1k$.
If $\gamma(k)$ is the function we defined above,
then by definition $k\leq\gamma(k)$.
Thus
\[ \dfrac1k\ \geq \ \dfrac1{\gamma(k)}\,. \]
Thus the partial sums of the Harmonic Series
\[ 1 + \dfrac 12 +\dfrac 13 +\dfrac 14 + \dfrac 15 +\dfrac 16
+ \dfrac17 + \dfrac18 + \dfrac19 + \dfrac1{10}+ \cdots \]
are even larger than the partial sums of the series
\[ 1 + \dfrac12 + \dfrac14 + \dfrac14 +
\dfrac18 + \dfrac18 + \dfrac18 +\dfrac18
+ \cdots +\dfrac1{\gamma(k)}+\cdots \]
which, as we have already seen, does not converge.
Therefore the Harmonic Series must also fail to converge.
In fact, if we use parentheses to group the Harmonic Series we get
\[ 1 + \dfrac12 +\left(\,\dfrac13 + \dfrac 14\,\right)
+\left(\,\dfrac 15 + \dfrac16 + \dfrac17 + \dfrac18\,\right)
+\left(\,\dfrac1{9}+\cdots+\dfrac1{16}\,\right)
+\left(\,\dfrac1{17}+\cdots+\dfrac1{32}\,\right)
+\cdots\,, \]
and we can see that the group of terms within each parenthesis
adds up to a sum greater than $\tfrac12$,
making it clear that if one takes enough terms of the Harmonic
Series one can get an arbitrarily large sum.
(The parentheses here do not change the series at all;
they only change the way we look at it.)
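The grouping argument can be verified with exact arithmetic. A small Python sketch (the function names are ours; exact fractions avoid any rounding):

```python
from fractions import Fraction

def group_sum(m):
    """Sum of the Harmonic Series terms 1/(2^m + 1) + ... + 1/2^(m+1)."""
    return sum(Fraction(1, k) for k in range(2**m + 1, 2**(m + 1) + 1))

def harmonic_partial(n):
    """The partial sum 1 + 1/2 + 1/3 + ... + 1/n."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

# Each parenthesized group exceeds 2^m copies of 1/2^(m+1), i.e. exceeds 1/2 ...
for m in range(1, 9):
    assert group_sum(m) > Fraction(1, 2)

# ... so the first 2^m terms already add up to at least 1 + m/2,
# which grows without bound.
for m in range(1, 12):
    assert harmonic_partial(2**m) >= 1 + Fraction(m, 2)
```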
\bigskip\section*{\textbf{The Geometric Series}}
The Ruler Series can be rewritten as follows:
\[ 1 + (\tfrac12) + (\tfrac12)^2 +(\tfrac12)^3 + (\tfrac12)^4
+\cdots \]
This is an example of a \emph{Geometric Series:}
\[ 1 + x + x^2 + x^3 + x^4 + x^5 + x^6 +\cdots \]
If $-1<x<1$, this series converges to $\dfrac1{1-x}$\,;
for any other value of~$x$ it diverges.

Now consider the series $\displaystyle\sum_{k=2}^\infty\dfrac1{k^2}$.
For $k\geq2$ we have $k^2>k^2-k=k(k-1)$, so that
\[ \dfrac1{k^2} \ < \ \dfrac1{k(k-1)}\,. \]
Comparing with a larger series is only useful
if the larger series is known to converge.
But we claim that the series $\displaystyle\sum_2^\infty\dfrac1{k(k-1)}$
converges.
This is because of the algebraic identity
\[\dfrac 1{k(k-1)}=\dfrac1{k-1}-\dfrac1k\,.\]
(For instance, for $k=3$, \ $\dfrac16=\dfrac12-\dfrac13$,
\ and for $k=8$,
\ $\dfrac1{56}=\dfrac17-\dfrac18$.)
When we look at the $k^{\text{th}}$
partial sum of the series
$\displaystyle\sum\dfrac{1\rule{0ex}{4ex}}{k(k-1)}$
with this identity in mind, we see that
\begin{align*}
\dfrac12 +\dfrac16 &+\dfrac1{12}+\dfrac1{20}+\cdots+\dfrac1{k(k-1)}\\[3pt]
&= \left(1-\dfrac12\right)
+\left(\dfrac 12-\dfrac 13\right)
+\left(\dfrac13-\dfrac14\right)
+\cdots+\left(\dfrac1{k-2}-\dfrac1{k-1}\right)
+\left(\dfrac1{k-1}-\dfrac1{k}\right)\,.
\end{align*}
\par
Now in this sum, each negative term cancels with the following
positive term,
so that the entire sum ``telescopes'' to a value of
$1-\dfrac1k\Fstrut$.
Since $\lim_{k\to\infty}\dfrac1k=0$,
we see that the series converges to~1.
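The telescoping computation is easy to confirm numerically; a brief Python sketch (ours):

```python
from fractions import Fraction

def telescoping_partial(n):
    """Sum of 1/(k(k-1)) for k = 2, ..., n."""
    return sum(Fraction(1, k * (k - 1)) for k in range(2, n + 1))

# The sum collapses to 1 - 1/n, which tends to 1 as n grows.
for n in range(2, 40):
    assert telescoping_partial(n) == 1 - Fraction(1, n)
```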
Use of the comparison test now shows that the series
$\displaystyle\sum\dfrac1{k^2}$ also converges.
\bigskip
In the same way, one can show that the series
$\displaystyle\sum\dfrac1{k^3}$ converges
by comparing it with the series
\[ \sum_{k=3}^\infty \dfrac1{k(k-1)(k-2)}\,. \]
One can show that this latter series telescopes,
although in a more complicated way than the previous example,
by using the formula (derived by using partial fractions)
\[ \dfrac1{k(k-1)(k-2)}
=\dfrac{\tfrac12}{k-2}-\dfrac1{k-1}+\dfrac{\tfrac12}k\,. \]
Thus one gets
\begin{align*}
\dfrac16 + \dfrac1{24} &+\dfrac1{60}+\dfrac1{120}
+\cdots +\dfrac1{k(k-1)(k-2)}+\cdots \\[5pt]
&= \left(\dfrac12 -\dfrac12+\dfrac16\right)
+\left(\dfrac14-\dfrac13+\dfrac18\right)
+\left(\dfrac16-\dfrac14+\mathbf{\dfrac1{10}}\right)
+\left(\dfrac18-\mathbf{\dfrac15}+\dfrac1{12}\right)
+\left(\mathbf{\dfrac1{10}}-\dfrac16+\dfrac1{14}\right)
+\cdots \\[2pt]
&=\dfrac12-\dfrac12+\dfrac14=\dfrac14 \,,
\end{align*}
since all the other terms cancel in groups of three.
(One group of three canceling terms is indicated in boldface.)
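One can likewise check the value $\tfrac14$ by machine. In the sketch below (ours), the closed form $\tfrac14-\tfrac1{2n(n-1)}$ for the partial sum is our own computation, consistent with the limit stated above:

```python
from fractions import Fraction

def triple_partial(n):
    """Sum of 1/(k(k-1)(k-2)) for k = 3, ..., n."""
    return sum(Fraction(1, k * (k - 1) * (k - 2)) for k in range(3, n + 1))

# The partial sums telescope to 1/4 - 1/(2n(n-1)), hence converge to 1/4.
for n in range(3, 40):
    assert triple_partial(n) == Fraction(1, 4) - Fraction(1, 2 * n * (n - 1))
```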
However this approach, while entertaining, is much more
work than what is required.
The series $\displaystyle\sum\Fstrut\dfrac1{k^3}$ obviously converges by
comparison to the series $\displaystyle\sum\dfrac1{k^2}\Fstrut$,
since
\[ \dfrac1{k^3}\ < \ \dfrac1{k^2}\,. \]
In fact, this logic shows that
$\sum\dfrac1{k^p}$ converges whenever $p\geq2$.
(Using the Integral Test\,---\,not discussed in these notes\,---\,%
one can show that in fact
$\displaystyle\Fstrut\sum\dfrac1{k^p}$ converges if and only if $p>1$.)
\vspace{.3in}
\section*{\textbf{The Limit Comparison Test}}
The comparison test as it stands is extremely useful
and in fact is fundamental in the whole theory
of convergence of infinite series.
And yet, in a way, it almost misses the point.
In the comparison test, we are comparing the size of the
terms in a series of interest with the size of the
terms in a series that is known to converge or diverge.
However consider the following series:
\[ 5 + \dfrac54 + \dfrac59 + \dfrac 5{16} + \cdots + \dfrac5{k^2}
+\cdots\,. \]
Now we know that the series
$\Fstrut\displaystyle\sum\dfrac1{k^2}$ converges.
If we are rather stupid, we might think that this is not
very helpful, since $5/k^2$ is not smaller than~$1/k^2$.
Looking at things this way, though, is not using our heads.
Obviously $\displaystyle\sum\dfrac5{k^2}$ will converge, and in fact,
its limit will be exactly five times the limit of
$\displaystyle\sum\dfrac1{k^2}$ (whatever that may be).
What the comparison test in its original form
fails to take into consideration is the following important principle:
\[ \begin{boxedminipage}{.9\linewidth}\parini
What counts is not how big the terms of a series are,
but how quickly they get smaller.
\end{boxedminipage} \]
Furthermore,
\[ \begin{boxedminipage}{.9\linewidth}\parini
The convergence or divergence of a series is not affected
by what happens in the first twenty or thirty or
one hundred or even one thousand terms.
The convergence or divergence depends only on the behavior
of the \textbf{tail} of the series.
Therefore if the tail of a series from a certain point on
is known to converge or diverge,
then the same will be true of the series as a whole.
\end{boxedminipage}\]
\medskip
Taking this into account,
we could tweak the comparison test in the following way.
\bigskip
\begin{center}
\begin{boxedminipage}{.8\linewidth}\parini
If~$\sum b_n$ is a positive series,
and $b_n\leq Ca_n$ for some constant~$C$ and all~$n$
from a certain point on, and $\sum a_n$ is known to converge,
then $\sum b_n$ will also converge.
Likewise, if $b_n\geq Ca_n$ for some constant~$C>0$ and all~$n$
from a certain point on, and $\sum a_n$ is known to diverge,
then $\sum b_n$ will also diverge.
\end{boxedminipage}
\end{center}
\bigskip
However one can get an even better tweak than this.
Consider, for example, the series
\[ 1+\dfrac23 +\dfrac3{11}+\dfrac4{31}+\cdots
+\dfrac{k+1}{k^3+k+1}+\cdots \,. \]
When $k$ is fairly large
(which is what really matters, since we need only look
at the tail of the series),
$(k+1)/(k^3+k+1)$ is very close to~$1/k^2$.
Thus it is tempting to compare this series to
$\sum\dfrac1{k^2}$, which is known to converge.
Working out the inequality is a bit of a nuisance, though,
and unfortunately it turns out that
$(k+1)/(k^3+k+1)$ is slightly larger than $1/k^2$:
just the opposite of what we need.
However, in light of the tweak mentioned above,
it would be sufficient to prove that, for instance,
\[ \dfrac{k+1}{k^3+k+1} \ < \ \dfrac{100}{k^2} \]
for large enough~$k$.
This is certainly true.
However it seems that one shouldn't have to work this hard.
Given that $(k+1)/(k^3+k+1)$ and $1/k^2$ are almost indistinguishable
for very large~$k$,
and that the tail of the series is all we care about anyway,
one would think that if one of the two series
$\sum(k+1)/(k^3+k+1)$ and $\sum1/k^2$ converges,
then the other should also
(although not to the same limit),
and if one of them diverges, then they both should.
This is in fact the case.
Any time ${\lim_{n\to\infty} a_n/b_n=1}$,
the two \textbf{positive} series $\sum a_n$ and $\sum b_n$
will either both converge or both diverge.
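For the example at hand, the relevant ratio does indeed approach~1, as a quick (purely numerical, floating-point) check shows:

```python
def term_ratio(k):
    """Ratio of (k+1)/(k^3 + k + 1) to 1/k^2."""
    return ((k + 1) / (k**3 + k + 1)) * k**2

# The ratio tends to 1, so sum (k+1)/(k^3+k+1) converges,
# since sum 1/k^2 does.
assert abs(term_ratio(10) - 1) < 0.1
assert abs(term_ratio(10**6) - 1) < 1e-5
```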
In fact, if we now take into consideration
the tweak that we previously made to the comparison test
(i.\,e.~the observation that what really matters is not
how large the terms of a series are, but how fast they
get smaller, and that therefore a constant factor
in the series will have no effect on its convergence),
we get the following:
\begin{center}\begin{boxedminipage}{.9\linewidth}\parini
\textbf{Limit Comparison Test.}
Suppose $\sum a_n$ and $\sum b_n$ are \textbf{positive} series
and that $\lim_{n\to\infty}\dfrac{b_n}{a_n}$ exists
(or is $\infty$).
\begin{enumerate}
\item If $\Fstrut\lim_{n\to\infty}\dfrac{b_n}{a_n}<\infty$,
and if $\sum a_n$ is known to converge,
then $\sum b_n$ also converges.
\vspace{3pt}
\item If $\Fstrut\lim_{n\to\infty}\dfrac{b_n}{a_n}>0$ (or is $\infty$)
and $\sum a_n$ is known to diverge,
then $\sum b_n$ also diverges.
\end{enumerate}
\end{boxedminipage}\end{center}
Thus the only inconclusive cases are when
$\lim_{n\to\infty}b_n/a_n$ does not exist;
or when the limit is $\infty$ and $\sum a_n$ converges;
or the limit is~0 and $\sum a_n$ diverges.
When the limit is $\infty$,
the terms $b_n$ are so much larger than $a_n$
that $\sum b_n$ might possibly diverge even when
$\sum a_n$ converges.
And when the limit is~0,
the~$b_n$ are so much smaller than~$a_n$
that $\sum b_n$ might converge even when $\sum a_n$ diverges.
\bigskip\subheading{Proof of the Limit Comparison Test}
Let's suppose, say, that
$\lim_{n\to\infty} b_n/a_n=5$.
This says that if $n$ is large,
then $b_n/a_n$ is very close to~5.
Then certainly, for large~$n$,
$4 < b_n/a_n < 6$.
This says that, for large~$n$,
\[ b_n < 6a_n \qquad\text{and}\qquad b_n > 4a_n\,. \]
But then if $\sum a_n$ converges,
we conclude that $\sum b_n$ also converges,
using the tweaked form of the comparison test.
And if $\sum a_n$ diverges, then $\sum b_n$ also diverges,
for the same reason.
More generally, if
$\lim_{n\to\infty}b_n/a_n=\ell>0$,
and we choose positive numbers~$r$ and~$s$
such that
\[ 0< r \ < \ \ell \ < \ s \,, \]
then for large enough~$n$,
$b_n/a_n$ is so close to $\ell$ that
\[ r \ < \ \dfrac{b_n}{a_n} \ < \ s\,, \]
so that
\[ b_n < sa_n \text{\qquad and\qquad} b_n>r a_n \]
for all terms in the series from a certain point on.
It then follows from the tweaked form of the comparison test
that if $\sum a_n$ converges than so does $\sum b_n$
and if $\sum a_n$ diverges then $\sum b_n$ does as well.
Now consider the possibility that
$\lim_{n\to\infty}b_n/a_n=\infty$.
This would say that for large~$n$,
$b_n/a_n$ is very large, so certainly
$b_n>a_n$,
so if $\sum a_n$ diverges then $\sum b_n$ must also diverge.
\vspace{.5in}
\section*{\textbf{Mixed Series}}
If a series has both positive and negative terms,
it is called a \textbf{mixed series}.
The theory for mixed series is more complicated than
for positive or negative series,
since a mixed series can diverge even though it
is bounded both above and below.
In this case, we say that it \textbf{oscillates}.
A simple example of a series which oscillates is
\[ 1 - 1 + 1 - 1 + 1 - 1 + \cdots\,. \]
This series is both bounded above and bounded below,
since the partial sums never get larger than~1 or smaller
than~$-1$.
In this case, we find that as we take more and more terms,
the partial sums do in fact oscillate between the two values
$+1$ and~$0$.
As a practical matter,
most of the oscillating series one encounters
do tend to jump back and forth more or less in this way.
However technically,
any series which does not go to $+\infty$ or to $-\infty$
and does not converge
is called oscillating.
Below, we will distinguish two different types
of convergence for mixed series:
absolute convergence and conditional convergence.
The possible behaviors for series
are described as follows:
\[ %\setlength{\extrarowheight}{4pt}
\begin{tabular}{ | l | l | l | }
\hline
Positive Series & Negative Series & Mixed Series \\[6pt] \hline
Converges absolutely & Converges absolutely & Converges absolutely \\[6pt]
Goes to $+\infty$ & Goes to $-\infty$ & Goes to $\pm\infty$\\[6pt]
& & Oscillates\\[6pt]
& & Converges conditionally\\[6pt]
\hline
\end{tabular} \]
For a mixed series, we can talk about the \textbf{positive part} of the
series, consisting of all the positive terms in the series,
and the \textbf{negative part}.
For instance, in the series
\[ 1 - \dfrac12 + \dfrac13 - \dfrac14 + \dfrac 15 -\cdots \]
the positive part is
\[ 1 + \dfrac13 +\dfrac15 +\cdots \]
and the negative part is
\[ \dfrac12 + \dfrac 14 + \dfrac 16 +\cdots \]
Notice that in writing the negative part,
we have taken the absolute value of the terms.
Thus we can write
\[\text{Whole Series}=\text{Positive Part}-\text{Negative Part}\,. \]
This is a little misleading, though.
It's not always true that in a mixed series the positive terms
and negative terms alternate.
So when we subtract two series,
it's not clear how to interlace the positive and negative terms.
For instance, in the example given,
we could misinterpret the difference as
\[ \text{Positive Part}-\text{Negative Part} =
1 + \dfrac13 -\dfrac 12+ \dfrac15 + \dfrac 17 +\dfrac19
- \dfrac14 +\dfrac1{11}+\dfrac1{13}+\dfrac1{15}+\dfrac1{17}
-\dfrac16+\cdots \]
where there are several positive terms for every negative term.
A little thought will show that as long as one keeps including
a negative term every so often,
all the negative terms will eventually be included in the series
so, paradoxically enough,
this new series actually contains the same terms as the original one
even though the positive terms are being used
more rapidly than the negative ones.
One's first impulse is to think that
changing the way the positive terms and negative terms
of a series are interlaced
shouldn't make any difference to the limit,
since both series ultimately do contain the same terms.
However consideration of the partial sums seems to clearly indicate
that the two series do not have the same limit.
In fact, the second series
\[ \text{Positive Part}-\text{Negative Part} =
1 + \dfrac13 -\dfrac 12+ \dfrac15 + \dfrac 17 +\dfrac19
- \dfrac14 +\dfrac1{11}+\dfrac1{13}+\dfrac1{15}+\dfrac1{17}
-\dfrac16+\cdots \]
does not seem to converge at all,
whereas we shall see from the Alternating Series Test
below that the first one does.
This is the reason that one should not think of an infinite series
as merely a process of adding up an infinite number of terms.
Instead, it is a process of adding more and more terms
taken in a particular sequence.
Here are the possibilities for a series with both positive and negative terms.
\begin{enumerate}
\item The positive part of the series and the negative part
both converge. In this case, the series as a whole must converge.
\item The positive part converges, but the negative part diverges.
In this case, the series as a whole must diverge.
More precisely, as one adds on more and more terms,
the result becomes more and more negative,
\ie the sum ``goes to $-\infty$.''
\item The positive part diverges and the negative part converges.
Once again, the series as a whole diverges. In this case,
it goes to $+\infty$.
\item The positive part and the negative part both diverge.
\textbf{In this case, anything can happen.}
\end{enumerate}
\[ \begin{tabular}{ l || c | c | }
\cline{2-3}
\rule{0ex}{6.2ex}& \shortstack{Positive part\\[4pt] Converges}
& \shortstack{Positive Part\\[4pt] Diverges} \\ \hline \hline
\vline\,\,\rule{0ex}{7ex}\shortstack{Negative part\\[3pt] Converges}
& \hspace{.5em}\shortstack{Series\\[4pt] Converges Absolutely}
& \shortstack{Series\\[4pt] Diverges}\\[2pt] \hline
\vline\,\,\shortstack{Negative part\\[4pt] diverges\\ \rule{0pt}{.7ex}}
& \shortstack{Series\\[4pt] Diverges\\ \rule{0pt}{.7ex}}
& \rule{0ex}{9.3ex} \shortstack{Series diverges \\[4pt]
or\\[3pt] Converges conditionally}
\\[2pt] \hline
\end{tabular} \]
At first, it seems very unlikely that a series can converge
if its positive and negative parts are both diverging.
What happens, though, is that as one adds more and more terms,
even though the positive terms alone add up to something
which eventually becomes huge,
and the negative terms add up to something which becomes hugely negative,
as one goes down the series
the two sets of terms keep balancing each other out
so that one gets a finite limit.
\begin{Sidebar}{Decimal Representations As Infinite Series}\parini
\setlength{\parindent}{1.5em}
\setlength{\parskip}{3pt}
\renewcommand{\baselinestretch}{1.0}\normalsize
% In explaining why every bounded increasing series
% must converge to some limit,
% I used the fact that real numbers can be represented
% as decimals.
% These explanation depended on two facts:
% (1)~The decimal representation for a real number can have
% (and usually does) an infinite number of decimal places.
% (2)~There is no restriction on the sequence of digits
% making up the decimal expansion of a real number.
% In particular, there is not a requirement
% that the digits should follow some repeating pattern.
% If either of these conditions were not true,
% then the explanation I gave would not be valid.
We usually take it for granted that a real number
is given in the form of a decimal.
But this leaves the problem of explaining
just exactly what we mean
by a decimal number which has infinitely many decimal places.
We can explain $3.48$, for instance, as a shorthand
for $\Fstrut\dfrac{348}{100}$.
\def\0{\hspace{.1em}}
But what is $1.23456789\0 10\0 11\0 12\0 13\dots$
a shorthand for?
It seems clear that a workable explanation
can only be given in terms of the limit concept.
The idea of an infinite series
is one way of giving such an explanation.
For instance, the decimal expansion for $\pi$,
\[ \pi = 3.14159265\dots \]
can be interpreted as the infinite series
\[ 3 + x + 4x^2 + x^3 + 5x^4 + 9x^5 + 2x^6 + 6x^7 + 5x^8 +\cdots \]
where $x=1/10$.
This is particularly useful in the case of decimals such as
\[ .001001001001\dots\,. \]
We can interpret this as
\[ \dfrac1{1000}(1 + x + x^2 + x^3 + x^4 +\cdots)\,,\]
with $x=\Fstrut\tfrac1{1000}$.
Since the expression in parentheses is a geometric series,
we can evaluate $.001001001\dots$ as
\[ .001001001\dots = \frac1{1000}\,\dfrac1{1-x}
= \dfrac1{1000}\,\dfrac1{.999}=\dfrac1{999}\,. \]
From this, we can see that any repeating decimal
with the pattern $.xyzxyz\dots$ evaluates to
$xyz/999$.
For instance,
\[ .027027027\dots = 027\times .001001001\dots =
\dfrac{027}{999}=\dfrac1{37}\,. \]
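These evaluations are easy to confirm with exact rational arithmetic; here is a short Python sketch (the function name is ours):

```python
from fractions import Fraction

def repeating_block(xyz):
    """Exact value of the repeating decimal .xyzxyzxyz... with block xyz."""
    return Fraction(xyz, 999)

# .001001001... = (1/1000) / (1 - 1/1000) = 1/999
assert Fraction(1, 1000) / (1 - Fraction(1, 1000)) == Fraction(1, 999)

# .027027027... = 27/999 = 1/37
assert repeating_block(27) == Fraction(1, 37)
```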
It's great that infinite series give us a way
of actually explaining what a non-terminating decimal
really means.
On the other hand, there is a certain amount of
circular reasoning here.
We explain what it means for an infinite series to converge
by saying that it converges to a real number.
And then we explain what a real number is by thinking of it
in terms of its decimal expansion.
And now we explain what a decimal expansion is
by interpreting it as an infinite series.
This is enlightening, and sometimes useful,
but hardly adequate for a rigorous
foundation of mathematical analysis.
Also note that if we take the method described above
for evaluating repeating decimals, and apply
it to $.999999999\dots$, we get
the apparently paradoxical (but true)
\[ .9999999999\dots = \dfrac{999}{999}=1\,, \]
so that in cases like this,
two different decimal expansions can correspond
to the same real number.
% One can search in vain for a fallacy here.
% There is none.
% It is in fact true that $.9999999\dots=1$.
% There are several ways of seeing this
% (aside from the one we have already given,
% which is completely valid).
% For one thing, $.99999\dots$ and $1.00000\dots$
% cannot be distinct, because there is no number
% that can possibly fit between them.
% Furthermore, it is clear that $.999999\dots$
% is closer to $1.000$ than $.999$.
% But the difference between $.999$ and $1.000$ is $.001$.
% Therefore the difference between $.999\dots$ and
% $1.000$ must be smaller than $.001$.
% But the same reasoning shows that this difference must
% be smaller than $.00001$. In fact, we have
% \[ .999 < .9999 < .99999 < .999999 < .9999999 <\cdots \leq 1.000 \]
% If we write this out in somewhat more algebraic notation, it read
% \[ 1-10^{-3} < 1-10^{-4} < 1-10^{-5} < \cdots < 1-10^{-n}
% < .99999\cdots \leq 1.00\,. \]
% From this, we can see that the difference between 1.000 and $.999\dots$
% mut be less than $10^{-n}$ for every positive integer~$n$.
% But the only non-negative number smaller than $10^{-n}$
% for every~$n$ is~0.
% Therefore we conclude that $1-.99999\dots=0$
% and thus $.999\dots=1$.
%
% The fact that two different decimals
% can represent the same real number
% shows that it is not satisfactory to think
% of a real number as being the same thing as a decimal.
\end{Sidebar}
\vspace{.3in}
\section*{\textbf{Alternating Series}}
In practice, mixed series are not usually as troublesome
as the discussion above would suggest.
This is because in most mixed series,
the positive and negative terms alternate.
In this case, what usually happens is that either
the series obviously diverges (in fact, oscillates)
because $\lim_{n\to\infty}a_n\neq0$,
or else it converges (either absolutely or conditionally)
according to a very simple test,
which will be described below.
Consider the following series,
whose positive and negative parts both diverge.
\[ 1 -\dfrac 12 +\dfrac 13 -\dfrac 14+ \dfrac 15 -\dfrac 16 +\cdots \]
If one looks at what happens as one adds in more and more terms
of this series, one gets the following partial sums:
\begin{align*} &\ 1 \\[4pt]
1-\dfrac12 =\dfrac12&=\dfrac{30}{60} \\[4pt]
1-\dfrac12+\dfrac13 =\dfrac56&=\dfrac{50}{60} \\[4pt]
1-\dfrac12+\dfrac13-\dfrac14 =\dfrac7{12}&=\dfrac{35}{60}\\[4pt]
1-\dfrac12+\dfrac13-\dfrac14+\dfrac15 &=\dfrac{47}{60} \\[4pt]
1-\dfrac12+\dfrac13-\dfrac14+\dfrac15-\dfrac16 &=\dfrac{37}{60} \\[4pt]
\qquad\cdots \end{align*}
\setlength{\unitlength}{.08 in}
\[ \begin{picture}(60,16)(0,-8)
\put(0,0){\line(1,0){60}}
\put(-.7,-2.5){0}
\put(59.3,-2.5){1}
\put(60,0){\circle*{.7}}
\put(29.5,-2){$\tfrac12$}
\put(30,0){\circle*{.7}}
\put(49.5,-2){$\tfrac56$}
\put(50,0){\circle*{.7}}
\put(34,-2){$\tfrac{7}{12}$}
\put(35,0){\circle*{.7}}
\put(46,-2){$\tfrac{47}{60}$}
\put(47,0){\circle*{.7}}
\put(36,-2){$\tfrac{37}{60}$}
\put(37,0){\circle*{.7}}
\end{picture}
\]
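The partial sums listed above are easily reproduced by machine; a small Python sketch (ours) using exact fractions:

```python
from fractions import Fraction

def alt_partial(n):
    """n-th partial sum of 1 - 1/2 + 1/3 - 1/4 + ..."""
    return sum(Fraction((-1) ** (k + 1), k) for k in range(1, n + 1))

# Over a common denominator of 60, these are the values marked on the line above.
assert [alt_partial(n) for n in range(2, 7)] == [
    Fraction(30, 60), Fraction(50, 60), Fraction(35, 60),
    Fraction(47, 60), Fraction(37, 60),
]
```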
Each new term being added on
has the opposite sign of the one before,
so it takes the sum in the opposite direction
to the previous one:
if the previous term was positive and moved the sum towards the right,
then the new term will be negative and move it towards the left.
As the fractions get more complicated here,
it becomes difficult to visualize
their relative positions on the number line,
but it's not hard to show that what happens
is that instead of getting larger and larger,
the partial sums are jumping back and forth
within a smaller and smaller radius.
This is because each jump to the left is smaller
than the previous jump to the right, and vice versa,
since the terms~$a_k$ keep getting smaller (in absolute value).
Since the partial sums of this series keep jumping back and forth
within a space whose radius is converging to~0,
one's intuition suggests that the series must eventually
converge to some limit.
Now anyone who goes through a calculus sequence
paying careful attention to the theory
will eventually realize the general principle
that intuition is very often wrong.
However in this case we have an exception.
Intuition is correct, and any series of this kind does converge.
A series which changes sign with each term\,---\,
\ie the sign of each term is the opposite of the sign of the preceding one\,---\, is an \textbf{alternating series}.
Not every mixed series is an alternating series,
but a lot of the most important ones are.
Alternating series are particularly nice because of the following:
\[ \fbox{\vtop{\kern 0pt\hbox{\textbf{Alternating Series Test:}
An alternating series will always converge }
\hbox{any time both the following two conditions hold: }
\kern 3pt
\hbox{(1) Each term is smaller in absolute value
than the one preceding it; }
\kern 3pt
\hbox{(2) As $k$ goes to $\infty$,
the $k\th$ term converges to~0. }
}} \]
Neither one of these two conditions
is adequate without the other.
For instance,
the series
\[ 1.1 - 1.01 + 1.001 - 1.0001 + 1.00001 - 1.000001 +\cdots \]
\noindent is alternating and clearly does not converge,
even though each term is smaller in absolute value
than the preceding one.
On the other hand, the alternating series
\[ 1 - 1 + \dfrac12 - \dfrac15 +\dfrac13 -\dfrac1{25}+\dfrac15
-\dfrac1{5^3}+\dfrac16 -\cdots \]
fails the alternating series test because
\begin{align*}
&\dfrac13 \ \nleq \ \dfrac 15 \\
&\dfrac16 \ \nleq \dfrac1{5^3} \\
&\text{etc.}
\end{align*}
even though the $n^{\text{th}}$ term
does go to zero as $n$ increases.
A series like this might still converge,
but this particular one does not.
(The negative part converges and the positive part
diverges, so the series as a whole must diverge.)
\bigskip
An annoying thing about using infinite series
for practical calculations is that even though you know
that by taking enough terms of a convergent series
you can get as close to the limit as you want,
in many cases it's not very easy to figure out
just exactly how many terms you'll need
to achieve some desired degree of accuracy.
But one of the nice things about series for which
the alternating series test applies
is that, since the partial sums keep hopping back and forth
from one side of the limit to the other
in smaller and smaller hops,
you can be sure that the error at any given stage
is always less than the size of the next hop:
i.\,e.~less than the absolute value of the next term in
the series.
\begin{boxedminipage}{.9\linewidth}\parini
Let
\[ a_1 - a_2 + a_3 - a_4 + a_5 -\cdots \]
be an alternating series satisfying the two conditions
of the alternating series test.
Then for any~$n$,
the difference between the partial sum
\[ a_1 - a_2 + \cdots \pm a_n \]
and the true limit of the series is always smaller
than $|a_{n+1}|$.
\end{boxedminipage}
\bigskip
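This error bound is easy to check numerically. The following small Python sketch (an illustration, not part of the argument) compares partial sums of the alternating harmonic series $1-\tfrac12+\tfrac13-\tfrac14+\cdots$, whose limit is $\ln 2$, against the bound $|a_{n+1}|$:

```python
import math

def partial_sum(n):
    """Sum of the first n terms of 1 - 1/2 + 1/3 - 1/4 + ... (limit: ln 2)."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

limit = math.log(2)
for n in (5, 50, 500):
    error = abs(partial_sum(n) - limit)
    # The boxed statement guarantees error < |a_{n+1}| = 1/(n+1).
    print(n, error, error < 1 / (n + 1))
```

In each case the actual error is indeed below $1/(n+1)$, and in fact close to half of it, as discussed next.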
In fact, for many alternating series that one actually
works with in practice,
once one goes a way out in the series,
the size of each term is not much different
from the size of the preceding one.
This means that as partial sums hop back and forth
across the limit, the forward hops and the backward ones
are roughly the same size.
This suggests that the true limit should lie roughly halfway between
any two successive partial sums.
In other words,
the error obtained by approximating
the series by the $n^{\text{th}}$ partial sum
will be roughly $|a_{n+1}|/2$.
(However one can easily cook up contrived examples
where this is not a good estimate.)
The frustrating thing here, though,
is that for some of the most well known alternating series,
this criterion shows that the error after a reasonable number
of steps is still discouragingly large.
For instance, the following alternating series
(derived from the Taylor series for the arctangent function)
converges to $\pi/4$:
\[ 1 - \dfrac13 + \dfrac15 - \dfrac17 + \dfrac19 +\cdots\, \]
One might hope that we could get a pretty accurate approximation
by taking 100 terms of this series.
But the $100^{\text{th}}$ term here will be $-1/199$,
and so the theorem above only guarantees us that after
taking 100 terms, the error will be smaller than
$1/201\approx .005$.
In other words, the theorem only tells us that
after taking 100 terms of the series,
we can only be sure of having an accurate result
up to the second digit after the decimal point.
If we want to be sure of accuracy up to the sixth digit
after the decimal point,
the theorem says that we would need to take
a million terms of the series.
Now mathematicians are not bothered by the idea
that one needs to take a million terms of a
series to get reasonable accuracy\,---\,they're
not going to actually \textbf{do} the calculation,
they're just going to \textbf{talk} about it.
For end users of mathematics, though\,---\,physicists,
engineers, and others who walk around
carrying calculators\,---\,this sort of accuracy
(or rather lack thereof) is anything but thrilling.
These people prefer to work with series where
one gets accuracy to at least a couple of decimal places
by taking the first two or three terms,
not half a million.
Of course if we use the idea that the actual error for the
alternating series that one usually encounters
is likely to be roughly half the next term in the series,
this would suggest that to get accuracy to the sixth place
after the decimal point in the above series for $\pi/4$,
one should really only need a \emph{half} of a million terms.
What a thrill!
However if it's really true that the limit for
the most typical alternating series
is often about halfway between two successive terms,
then we ought to be able to get much better accuracy
by replacing the final term in the partial sum by half that amount.
In other words, we could try a partial sum of
\[ a_1 - a_2 + a _3 -\cdots \pm a_{n-1} \mp\dfrac{a_n}2 \,. \]
Suppose we try this with the series for $\pi/4$,
this time taking only ten terms.
A calculation shows that
\[ 1 - \Fstrut\dfrac13 + \dfrac15 - \dfrac17 + \dfrac19 -\dfrac1{11}
+\dfrac1{13} - \dfrac1{15} + \dfrac1{17} - \dfrac12\cdot\dfrac1{19}
\approx .7868\,. \]
On the other hand, to four decimal places,
$\pi/4=.7854$.
So in this example, at least,
by tweaking the calculation we got accuracy up to an error
of roughly~.001 using only 10 terms of the series,
instead of needing five hundred thousand.
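Here is a small Python sketch reproducing this calculation, comparing the plain ten-term partial sum with the tweaked one:

```python
import math

def term(k):
    """k-th term (k = 1, 2, 3, ...) of 1 - 1/3 + 1/5 - ..., whose limit is pi/4."""
    return (-1) ** (k + 1) / (2 * k - 1)

plain = sum(term(k) for k in range(1, 11))                   # ordinary 10-term sum
tweaked = sum(term(k) for k in range(1, 10)) + term(10) / 2  # halve the final term

print(round(plain, 4), round(tweaked, 4), round(math.pi / 4, 4))
```

The tweaked sum lands within about $.0014$ of $\pi/4$, while the plain ten-term sum is off by about $.025$.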
Tweaking an alternating series in this way will often give
fairly good results when $a_{n+1}$ and $a_n$
are roughly the same size (at least for large~$n$),
although without a theorem to justify the
method, one doesn't have guaranteed reliability.
On the other hand, consider the alternating series
\[ \sum_0^\infty \left(\dfrac{-1}5\right)^n
= 1 - \dfrac15 + \dfrac 1{25} - \dfrac1{125} + \dfrac1{625} -\cdots
+(-1)^n\dfrac1{5^n}+ \cdots\,. \]
Look at some partial sums for this series:
\begin{align*} 1 &= 1 \\
1 - \dfrac15 &= 1-.2=.8 \\
1 - \dfrac15 +\dfrac1{25} &= .8+.04=.84 \\
1 - \dfrac15 + \dfrac1{25} -\dfrac1{125} &= .84-.008 = .832 \\
1 - \dfrac15 +\dfrac1{25} - \dfrac1{125} +\dfrac 1{625}
&= .832 + .0016 = .8336\,.
\rule[-4ex]{0ex}{2ex}\end{align*}
Here $a_{n+1}$ is much smaller than~$a_n$ in absolute value
(in fact, $|a_{n+1}|=|a_n|/5$).
This is a geometric series and its limit is
$\dfrac1{1+\tfrac15\rule[-1.5ex]{0ex}{2ex}}
=\rule{0ex}{4.5ex}\dfrac56\,=.833333\dots.$
If we were to use
${a_0+a_1+a_2+a_3+\Fstrut\dfrac{a_4}2}$
as an approximation to the limit
we would wind up with a value of~.8328,
which is not nearly as good an approximation as
${a_0+a_1+a_2+a_3+a_4=.8336}$.
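A short Python check of these numbers confirms that the half-term tweak loses here:

```python
terms = [(-1 / 5) ** n for n in range(5)]   # 1, -1/5, 1/25, -1/125, 1/625
limit = 5 / 6                               # sum of the full geometric series

plain = sum(terms)                          # a_0 + a_1 + a_2 + a_3 + a_4
tweaked = sum(terms[:4]) + terms[4] / 2     # halve the final term instead

print(round(plain, 4), round(tweaked, 4))   # .8336 versus .8328
```

The plain partial sum is about twice as close to $5/6=.8333\dots$ as the tweaked one, because halving a term that was already tiny overshoots the correction.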
\bigskip
Obviously no series for which $\lim_{k\to\infty}a_k\neq0$
can ever converge.
On the other hand, occasionally one will encounter an alternating series
where the successive terms do not
consistently get smaller in absolute value.
If a series like this does not converge absolutely,
it may be quite a problem figuring out
what happens.
\vspace{.3in}
\section*{\textbf{Absolute Convergence}}
For mixed series, we distinguish between two types
of convergence: \textbf{conditional} convergence
and \textbf{absolute} convergence.
The issue here is whether the terms of the series
get small so rapidly that it would converge
even if we ignored the signs,
or if the terms of the series get small slowly
but the series still converges
only because the positive and negative terms
remain in balance.
Let's restate this more carefully.
If we take a mixed series and make all the terms positive,
then we get the corresponding \textbf{absolute value series}.
The absolute value series is obtained from the original series
by adding together the positive and negative parts
instead of subtracting them.
\begin{align*}
\text{Original Series} &= \text{Positive Part}-\text{Negative Part} \\
\text{Absolute Value Series} &=\text{Positive Part}+\text{Negative Part}
\end{align*}
\textbf{Absolute convergence} means that the absolute value series
converges.
(Conditional convergence will be defined below
as meaning that the original mixed series
converges, but the corresponding absolute value series
does not.)
(For the record, we note the following trivial fact:
\textbf{Any positive series converges if and only if
it converges absolutely.
Likewise for a negative series.})
The definition does not explicitly say
that a series which converges absolutely actually does converge,
however this is in fact the case.
To see this, we can note the following important principle.
\begin{boxedminipage}{.9\linewidth}
If a positive series converges,
and a new series is formed by leaving out some of the terms
of this series,
then the new series will also converge.
\end{boxedminipage}
The reason for this is that saying that a positive series
converges is the same as saying that its partial sums are bounded.
But leaving out some of the terms of a bounded series
can't possibly make it become unbounded.
Since the absolute value series
corresponding to an original mixed series
is the sum of the positive and negative parts
of the original series,
the above principle shows that the absolute value
series corresponding to a given series converges
if and only if the positive and negative parts
of the series both converge.
From this, we see the following:
\[ \fbox{If a series converges absolutely,
then it converges.} \]
\vspace{12pt}
The limit comparison test can sometimes be used to determine
whether an infinite series converges absolutely or not.
\begin{center}\begin{boxedminipage}{.9\linewidth}\parini
\textbf{New Limit Comparison Test.}
Suppose $\sum a_n$ and $\sum b_n$ are
not-necessarily-positive series
and that $\Fstrut\lim_{n\to\infty}\left|\dfrac{b_n}{a_n}\right|$
exists (or is $+\infty$).
\begin{enumerate}
\setlength{\parskip}{4pt}
\item If
$\Fstrut\rule[-3ex]{0ex}{4ex}\lim_{n\to\infty}\left|\dfrac{b_n}{a_n}\right|
<\infty$,
and if $\sum a_n$ is known to converge absolutely,
then $\sum b_n$ also converges absolutely.
\item If
$\rule[-3ex]{0ex}{6ex}\Fstrut
\lim_{n\to\infty}\left|\dfrac{b_n}{a_n}\right|>0$
(or is $\infty$)
and $\sum a_n$ is known to not converge absolutely,
then $\sum b_n$ also does not converge absolutely.
\end{enumerate}
\end{boxedminipage}\end{center}
\medskip
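As a small illustration (the particular pair of series here is chosen for the example, not taken from the text above), one can watch the ratio $|b_n/a_n|$ settle down numerically for $a_n=1/n^2$ and $b_n=(-1)^n/(n^2+1)$:

```python
def a(n):
    return 1 / n ** 2              # comparison series, known to converge absolutely

def b(n):
    return (-1) ** n / (n ** 2 + 1)  # series being tested

# |b_n / a_n| = n^2 / (n^2 + 1) approaches the finite limit 1,
# so by part (1) of the test, sum b_n also converges absolutely.
ratios = [abs(b(n) / a(n)) for n in (10, 100, 1000, 10000)]
print(ratios)
```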
As indicated above, a series can sometimes converge
even when its positive and negative parts both diverge,
i.\,e.~without converging absolutely.
This can happen because as we go further
and further out in the series,
the positive and negative terms balance each other out.
For instance the alternating series discussed above,
\[ 1 - \dfrac12 +\dfrac13 -\dfrac14 +\dfrac15-\dfrac16+\cdots \]
converges but does not converge absolutely,
since the absolute value series is the divergent Harmonic Series:
\[ 1 +\dfrac12 +\dfrac13 +\dfrac14 +\dfrac15 +\dfrac16+\cdots\,. \]
In a case like this, although the convergence is quite genuine,
it is also rather delicate,
since it depends on the positive and negative terms staying in balance.
If we were to rearrange the order of the series, for instance,
\[ \text{Positive Part}-\text{Negative Part} =
1 + \dfrac13 -\dfrac 12+ \dfrac15 + \dfrac 17 +\dfrac19
- \dfrac14 +\dfrac1{11}+\dfrac1{13}+\dfrac1{15}+\dfrac1{17}
-\dfrac16+\cdots \]
the new series would not converge,
since the positive terms would outweigh the negative ones.
And yet both series consist of the same terms,
only arranged in a different order.
(At first, one is likely to think that some negative terms will get
left out of the second series, since the positive terms are being
``used up'' much more quickly than the negative ones.
But in fact, every term in the original series, whether positive or negative, does eventually show up in the new one,
although the negative ones show up quite a bit further out
than originally.)
\begin{boxedminipage}{.9\linewidth}\parini
When a series converges, but the corresponding absolute value series
does not converge, one says that the series
\textbf{converges conditionally}.
\end{boxedminipage}
This term is unfortunate, because it leads students
to think that a series which converges conditionally
doesn't ``really'' converge.
It does quite genuinely converge, though,
as long as one takes the terms in the order specified.
Rearranging the order of terms, however,
will cause problems in the case of a conditionally converging series.
The rearranged series may diverge,
or it may converge to some different limit,
since rearranging changes the balance between positive and negative terms.
This is a key reason to remember that when one finds the limit
of an infinite series one is not actually adding up all the
infinite number of terms.
In evaluating an infinite series, one is doing calculus, not
algebra.
In algebra, one always gets the same answer
when adding a bunch of numbers,
no matter what order one adds them in.
In infinite series, the order of the terms
can affect what the limit is,
if the series converges conditionally.
In fact, there's a theorem due to Riemann
that says that by rearranging the terms of a series
which converges conditionally,
you can make the limit come out to anything you want.
\bigskip\subheading{Theorem [Riemann]}
By rearranging the terms of a conditionally convergent series,
you can get a series that diverges
or converges to any preassigned limit.
\proof
The point is that if a series converges conditionally,
then the positive and negative parts both diverge,
but they are in such a careful balance that the difference
between them converges to some finite limit.
Rearranging the series will affect this balance,
and with sufficient premeditation
one can make the limit come out to anything one wants.
Let's consider, for instance, the series
\[ 1 - \dfrac12 +\dfrac13 - \dfrac14 +\dfrac15 -\cdots \]
which is known to converge to $\ln 2$.
The positive part of this series
\[ 1 + \dfrac13 +\dfrac15 +\dfrac17 +\cdots \]
and the negative part
\[ \dfrac12 + \dfrac 14 +\dfrac 16 +\dfrac 18 +\cdots \]
both diverge.
This divergence is essential in order for the trick
we shall use to work.
It is also essential to know that the limit
of the $n^{\text{th}}$ term as $n$ goes to infinity is~0.
This is always the case here, since otherwise
the series could not converge even conditionally.
Now suppose we want to rearrange this series to get a limit
of, say, $-20$.
Considering the size of the terms we're working with,
$-20$ is a really hugely negative number,
but we could even go for $-500$, if we really wanted.
To start with, we'll use only negative terms of the series.
Since the negative part of the series diverges,
we know that by taking enough terms
\[ -\dfrac12 -\dfrac14-\dfrac16-\dfrac18-\dfrac1{10}-\dfrac1{12}-\cdots \]
we can eventually get a sum more negative than~$-20$.
It would be rather painful to calculate exactly how many terms
we'd need to get this, and it really doesn't matter,
but just out of curiosity
we can make a rough approximation.
Our guesstimate for the required number of terms
will be based on the fact that
the sum of the third and fourth terms here is numerically larger
than~$1/4$ (i.\,e.~larger in absolute value),
since $1/6$ is larger than $1/8$.
Likewise the sum of the next four terms is numerically larger
than~$1/4$, since
\[ \dfrac1{10}+\dfrac1{12}+\dfrac1{14}+\dfrac1{16} \ >
\dfrac1{16} + \dfrac1{16} +\dfrac1{16} +\dfrac1{16} = \dfrac14\,. \]
Continuing in this way,
the sum of the next eight terms after that is numerically larger than~$1/4$,
as is the sum of the next sixteen terms after that.
Now $-1/4$ is not a very negative number in comparison
to $-20$,
but by the time we get 80 groups of terms,
all less than $-1/4$,
we'll have a sum of less than~$-20$.
A careful consideration thus shows that
\[ -\dfrac12 -\dfrac14-\dfrac16-\cdots -\dfrac1{2^{80}} \ < \ -20\,. \]
Note that $2^{80}=\left(2^{10}\right)^8$,
and $2^{10}=1024$,
so that $2^{80}$ is not a whole lot bigger than
$(10^3)^8=10^{24}$.
Thus it looks like it will take something like
a septillion ($10^{24}$) negative terms to push the partial sum below~$-20$.
But nobody promised it was going to be easy!
After all, we're trying to produce a large number
(or rather an extremely negative one)
by adding up an incredible number of small ones.
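The doubling-blocks argument above gives only a generous upper bound. If one is willing to borrow the standard approximation $H_m\approx\ln m+\gamma$ for the $m^{\text{th}}$ harmonic number (an outside fact, not used elsewhere in this discussion), a short Python sketch gives a sharper estimate of how many negative terms are really needed:

```python
import math

# The sum 1/2 + 1/4 + ... + 1/(2m) equals H_m / 2, and H_m is roughly
# ln m + 0.5772 (Euler's constant).  Solving (ln m + 0.5772) / 2 = 20
# estimates how many negative terms push the partial sum past -20:
m = math.exp(2 * 20 - 0.5772)
print(f"{m:.3g}")   # on the order of 10**17 terms
```

So "merely" about $10^{17}$ terms actually suffice, though that changes nothing in principle: the count is still astronomical.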
At this point, we can now finally include a positive term,
so that the rearranged series so far looks something like
\[ -\dfrac12 -\dfrac14 -\dfrac16 -\dfrac18 -\cdots-\dfrac1{2^{80}} + 1\,. \]
Now this positive term will undoubtedly push the sum back up above
$-20$.
If not, we can include still more positive terms
until we achieve that result.
The crucial thing is that no matter how big a push we need,
there are enough positive terms to achieve that,
since we know the positive part of the series diverges.
Once we manage to push the sum above~$-20$,
we start using negative terms again to push it back down.
And once the sum is less than~$-20$,
we use positive terms again to push it back up to something
greater than~$-20$.
As we keep making the partial sums keep swinging back and forth
from one side of~$-20$ to the other,
it's essential to make sure that the radius of the swings
approaches~0 as a limit,
so that the new series actually converges to~$-20$.
We can accomplish this by always changing direction
(i.\,e.~changing from negative to positive or vice-versa)
as soon as the partial sum crosses~$-20$,
since in this case the difference between the partial sum
and~$-20$ will always be less than the absolute value
of the last term used,
and we know that the last term will approach~0
as we go far enough out in the series.
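The game plan above can be carried out mechanically. The Python sketch below uses a modest target ($0.3$ rather than $-20$, since reaching $-20$ would take an astronomical number of terms): it spends positive terms while the partial sum is at or below the target, and negative terms while it is above.

```python
def rearranged_sum(target, n_terms=100_000):
    """Rearrange 1 - 1/2 + 1/3 - ... per the game plan: positive terms
    while the partial sum is at or below target, negative terms while above."""
    pos, neg = 1, 2           # next unused odd (positive) and even (negative) denominators
    total = 0.0
    for _ in range(n_terms):
        if total <= target:
            total += 1 / pos  # next positive term: 1, 1/3, 1/5, ...
            pos += 2
        else:
            total -= 1 / neg  # next negative term: 1/2, 1/4, 1/6, ...
            neg += 2
    return total

print(rearranged_sum(0.3))    # close to 0.3, although the usual order gives ln 2
```

Every term is used exactly once, yet the rearranged sum homes in on whatever target we chose.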
Now one can't help but notice
that in the above process one is using up the negative terms
at an extravagantly lavish rate,
and using positive terms at an extremely miserly rate.
So one's first thought is that either one eventually runs out
of negative terms,
or that some of the positive terms never get included at all.
In the first place, though,
one never runs out of negative terms
because there are an infinite number of them available.
But do all of the positive terms eventually get used?
Well, choose a positive term and I'll convince you
that it does eventually appear in the new series.
Suppose, say, you choose $\dfrac1{201}$.
This is the $101^{\text{st}}$ positive term in the original
series.
Now if we never got to $\Fstrut\dfrac1{201}$ in the rearranged series,
this would mean that at most~100 positive terms from the original
series are being used for the new series.
But this means that in the new series,
the positive terms would add up to less than
\[ 1 + \dfrac13 +\dfrac15 +\cdots+\dfrac1{201}\,.\]
Now we don't need to worry about just exactly how big
that sum is,
because the point is that it's some finite number,
even if possibly somewhat large.
On the other hand, the negative part of the series diverges.
This would mean that our new series
would eventually start becoming indefinitely negative,
approaching~$-\infty$,
which would violate our game plan,
because once the partial sum is less than~$-20$,
we are supposed to use another positive term.
So the point is, if we follow the game plan
then we can be sure that $\Fstrut\dfrac1{201}$,
or in fact any of the positive terms,
does eventually occur.
\bigskip
Using a variation on this method,
we can produce a rearranged series that diverges.
We start out as before, using negative terms
until the partial sum is less than~$-20$.
Then we use one positive term,
then use more negative terms until the sum is less than~$-40$.
Again we use one positive term,
then go back to negative ones until the sum is less than~$-60$.
Etc.~etc.
Once again, one can see that even though one seems to almost
never use any positive terms,
eventually all the positive terms of the original series
do get included in the rearranged series.
\bigskip
\subheading{Hilbert's Infinite Hotel}
Riemann's trick, as described above,
depends on properties of infinite sets
that mathematicians were only beginning to appreciate
in the Nineteenth Century.
Around the beginning of the Twentieth Century,
Hilbert explained this basic idea as follows:
Suppose that we have a hotel with an infinite number of rooms,
numbered $1,\, 2,\, 3,\, \dots$, and all the rooms are full.
(This is a purely imaginary hotel, of course,
because infinity does not occur in the real world.
If modern physics is correct,
even the number of atoms in the whole universe,
although humongous, is still not infinite.)
Now suppose a new guest shows up.
In the real world, since all the rooms are full,
there would be no room for the new guest.
But in the infinite hotel,
one simply has the guest in room~1 move into room~2,
the guest in room~2 move into room~3, and the guest
in room~3 move into room~4, etc.
Every time one has the guest in room~$n$ move,
there is always a room~$n+1$ to move him into.
(One might even be able to get away with this in a real world
hotel if there were enough rooms.
Say there were a thousand rooms.
Before one got to room~1000, where there would be a problem,
surely someone would have checked out.)
Hilbert's Infinite Hotel is a little like a Ponzi scheme,
where one sells a worthless security to investors,
but keeps paying off the old investors by using
the money paid in by the new ones.
Ponzi schemes don't work because the real world is not infinite,
so eventually one runs out of new suckers, er, investors
to supply the necessary money.
One of the miracles of modern mathematics
is that it manages to use something that can't exist
in the real world\,---\,infinity\,---%
to achieve results that do work in the real world.
\vspace{.3in}
\bigskip\subheading{Absolutely Convergent Series (continued)}
To see that what happens
in conditionally convergent
series cannot happen for absolutely convergent ones,
let's first consider the case of a positive series.
In the case of a series whose terms are all positive,
one cannot affect whether the series converges or not
by rearranging the terms.
This is because a positive series converges if and only if it is bounded,
and you can't change whether a series is bounded or not
by taking the same terms in a different order.
Not only that, but you can't affect what the limit is
by rearranging the terms of a positive series.
To see why this is, let's consider the example of the Ruler Series:
\[ 1 + \dfrac 12 + \dfrac 14 + \dfrac 18 + \dfrac 1{16} + \dfrac1{32}
+ \cdots \]
The limit of this series is~2 and by the time
one has taken the first 8 terms,
the sum agrees with the limit to within an error smaller than~.01:
\[ 1 + \dfrac 12 + \dfrac 14 + \dfrac 18 + \dfrac 1{16} + \dfrac1{32}
+\dfrac1{64}+\dfrac1{128}=1\dfrac{127}{128}\,. \]
Now this means that all the terms from the 9\th one out
never add up to more than~.01
(in fact, never add up to more than 1/128),
no matter how many one takes.
Now suppose we take the same terms in some different order,
but without repeating any or leaving any out.
Eventually, if we go far enough out in the rearranged series
we will have to include all the first~8 terms
$1, \ 1/2, \ 1/4,\ 1/8,\ \dots,\ 1/128$ of the original series
(and probably many others as well),
because by assumption no terms are getting left out.
For instance, the beginning of the rearranged series might look like
\[ \dfrac14 +\dfrac1{512}+ \dfrac1{32} + \dfrac12 +
1 + \dfrac1{128} +\dfrac1{64}
+\dfrac1{1024}+ \dfrac 1{16} +\dfrac18
\,. \]
Now note that
\[ 2 \quad > \qquad \dfrac14 +\dfrac1{512}+ \dfrac1{32} + \dfrac12 +
1 + \dfrac1{128} +\dfrac1{64}
+\dfrac1{1024}+ \dfrac 1{16} +\dfrac18
\qquad > \quad 1\,\tfrac{127}{128}\,. \]
\noindent
(The first inequality is true because the limit of the original
series is~2 and the sum in the middle
does not have all the terms of the complete series.
The second inequality is true because the sum in the middle
is larger than the sum of the terms from 1 to~1/128.)
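A one-line Python check of the displayed inequality:

```python
# The ten rearranged terms of the Ruler Series shown above.
rearranged = [1/4, 1/512, 1/32, 1/2, 1, 1/128, 1/64, 1/1024, 1/16, 1/8]
s = sum(rearranged)

# The list contains all eight terms 1, 1/2, ..., 1/128, plus 1/512 and 1/1024,
# so its sum must lie strictly between 1 + 127/128 and the full limit 2.
print(1 + 127/128 < s < 2)   # True
```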
But at that point, whatever terms are left
will add up to less than~.01.
That means that eventually the sum of the rearranged terms
will be within .01~of the original limit~2.
So the new limit and the original limit must agree
to within a possible error of~.01.
But~.01 was nothing except an arbitrarily convenient
standard of accuracy.
By going far enough out in the series,
we could replace this by any desired small number.
Thus we can show that the limit of the rearranged series
and the limit of the original series agree
to within any conceivable degree of accuracy.
Thus they are the same.
The logic here applies to any series whose terms are all positive.
\[ \begin{boxedminipage}{6in}\noindent \parini
Rearranging the terms of a positive series
does not affect whether the series converges or not,
and if it does converge,
rearranging the terms does not affect what the limit is.
\end{boxedminipage} \]
Now, going back to mixed series,
what we see is that if we rearrange a series,
this will not affect the limit of the corresponding
absolute value series.
Therefore if the mixed series in question
converges absolutely,
no matter how one rearranges the terms,
the new series would still converge absolutely
and thus could not diverge.
And furthermore, even when an absolutely convergent series is rearranged,
if one goes far enough out in the series
then all the positive terms which are still left
as well as all the negative terms still left
will only add up to something extremely small.
In fact, the sum of the terms at the end
can be made arbitrarily small
by going far enough out in the series.
Thus essentially the same logic
as given above for positive series applies to show that
\[
\begin{boxedminipage}{6.1in}\noindent\parini
Rearranging the terms
of an absolutely convergent series
does not affect whether the series converges or not,
and if it does converge,
rearranging terms does not affect what the limit is.
\end{boxedminipage} \]
\vspace{.3in}
\section*{\textbf{The Ratio Test}}
Consider the series
\[ 72+36+12+3+\dfrac35+\dfrac1{10}+\dfrac1{70}+\dfrac1{560}+\cdots \]
The pattern here is that the second term is one-half the first term,
the third term is one-third the second term,
the fourth term is one-fourth the third term,
and for all~$k$, \,$a_k=a_{k-1}/k$.
The numbers in this series are fairly large, which makes it seem
unlikely that the series converges.
However we can notice that since $\dfrac1k<\dfrac 12$ for $k>2$,
each term of this series after the second
is smaller than one-half the preceding term,
so that
\[ 72+36+12+3+\dfrac35+\dfrac1{10}+\dfrac1{70}+\cdots
<72\,(1+\dfrac12+\dfrac14+\dfrac18+\cdots+\dfrac1{2^k}+\cdots\,)
=72\times2, \]
showing that this series converges by the comparison test
with respect to the geometric series for~$x=1/2$.
(The comparison test is valid since the series is positive.)
In general, suppose that we have a \textbf{positive} series
\[ a_0+a_1+a_2+a_3+\cdots \]
such that for all~$k$,
\,$a_{k+1}/a_k\leq r$ for some positive number~$r$
with $r<1$.
Then $a_k\leq a_0r^k$ for every~$k$, and so by comparison with the geometric series
\[ a_0\,(1+r+r^2+r^3+\cdots)=\dfrac{a_0}{1-r}\,, \]
the series converges.
On the other hand, if $a_{k+1}/a_k\geq 1$ for all~$k$,
then clearly the series cannot converge.
(In this case, the terms aren't even getting smaller.)
This is the crude form of the \textbf{ratio test.}
There are two pitfalls to the crude form of the ratio test.
First, the reasoning here does not justify
applying it to series which are not positive,
since the comparison test only works for positive series.
(We will later see a way around this restriction.)
Secondly, it is not enough to merely know that
$a_{k+1}/a_k<1$ for all~$k$.
One must have a positive number~$r$ \textbf{strictly smaller} than~1
which is independent of~$k$
such that $a_{k+1}/a_k\leq r$ for all~$k$.
There are two tweaks that make the ratio test much easier to apply.
The first is that it is enough for the condition $a_{k+1}/a_k\leq r$
to hold from some point on,
since a finite number of terms at the beginning of a series
never affects whether it converges.
For instance, consider a series where $a_{k+1}=100\,a_k/k$.
The early ratios here are larger than~1, but for $k>200$ we have
$a_{k+1}=100 a_k/k < \tfrac12 a_k$,
so that the ratio test shows that the series does in fact converge.
(In fact, it converges fairly rapidly, once one gets past
the first hundred terms or so,
which are indeed huge.)
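One can imitate this behavior numerically with the smaller constant $10$ in place of $100$ (an adjustment made here so that ordinary floating point suffices), taking $a_k=10^k/k!$, so that $a_{k+1}/a_k=10/(k+1)$:

```python
import math

# a_k = 10**k / k!, so a_{k+1}/a_k = 10/(k+1): bigger than 1 at first,
# but below 1/2 for all k >= 20, so the ratio test applies from that point on.
terms = [10.0 ** k / math.factorial(k) for k in range(120)]
ratios = [terms[k + 1] / terms[k] for k in range(30)]

print(max(terms))   # the terms grow for a while...
print(sum(terms))   # ...but the series still converges (the limit is e**10)
```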
The second tweak is even more powerful,
despite being essentially a special case of the first.
\fbox{\parbox{.88\linewidth}{\parini
One can actually take the limit of $a_{k+1}/a_k$
as $k$ gets larger and larger.
If this limit is \textbf{strictly} smaller than~1,
then the series converges.}}
To see why this is so,
suppose, for instance, that
$\lim_{k\to\infty}\dfrac{a_{k+1}}{a_k}=.9$.
This means that by taking~$k$ large enough,
we can make $a_{k+1}/a_k$ arbitrarily close to~.9.
For instance, from some point on,
all the values of $a_{k+1}/a_k$ will lie within
a distance smaller than~.05 from~.9.
Restated, this says that
\[ .85 < \dfrac{a_{k+1}}{a_k} < .95 \]
from some point in the series on.
But this means that the series converges by the ratio test
with $r=.95$.
In more general terms,
the reasoning is that if $\lim_{k\to\infty}\dfrac{a_{k+1}}{a_k}=\ell$
and $\ell < 1$, then there exists a real number~$r$ between $\ell$ and~1.
Furthermore, if we write $\epsilon=r-\ell$,
then $\epsilon>0$ and by definition of the concept of limit,
whenever~$k$ is large enough then
\[ \ell-\epsilon < \dfrac{a_{k+1}}{a_k} < \ell +\epsilon = r\,. \]
But then since $r<1$, the series converges by the crude form
of the ratio test.
The flip side of the ratio test also works.
Namely, if $\lim_{k\to\infty}\dfrac{a_{k+1}}{a_k}>1$,
then the series diverges.
This is because
if $\lim_{k\to\infty}\dfrac{a_{k+1}}{a_k}=\ell>1$,
and if $s$ is any number such that
$\ell>s>1$,
then applying the same kind of reasoning as above
it can be seen that $a_{k+1}/a_k>s$ for all~$k$
from a certain point on.
But since~$s>1$, this says that for all~$k$ from a certain point on,
$a_{k+1}>a_k$, so surely the series cannot converge.
(Note that this reasoning can be applied even if the series
is not positive, provided we look at
$\lim_{k\to\infty}|a_{k+1}/a_k|$.)
It turns out that the ratio test
works even for series which are not positive,
if we consider $\Fstrut\lim_{k\to\infty}|a_{k+1}/a_k|$.
If this limit is strictly smaller than~1,
this will in fact show that the series
$|a_0|+|a_1|+ |a_2| + |a_3| + |a_4| +\cdots$
converges,
so that the original series converges absolutely,
and thus converges.
On the other hand,
if $\Fstrut\lim_{k\to\infty}|a_{k+1}/a_k|>1$,
this means that $|a_{k+1}|>|a_k|$ for all large~$k$,
so certainly the original series can't converge.
\fbox{\vbox{\kern 1pt\hbox{\textbf{Ratio Test. }
If $a_0+a_1+a_2+\cdots$ is an infinite series,
let $\ell=\lim_{k\to\infty}\dfrac{|a_{k+1}|}{|a_k|}$.}
\hbox{If $\ell <1 $ then the series converges
and if $\ell>1$ then the series diverges.}
\kern 1pt
}}
The ratio test is a marvelous trick
because it enables one to decide whether a vast number of series
converge or not without doing any hard thinking,
provided that one can compute $\ell$,
which is often not very difficult.
(In fact, it's so powerful that maybe it should be outlawed
for students.\grin\,)
The only drawback to the ratio test
is that it doesn't give any information in case $\ell=1$.
In this case, the decision could go either way.
Consider, for instance, the following three series:
\begin{align*} &1 + 2 + 3 + 4 + \cdots \\[2pt]
&1+ \dfrac12 +\dfrac13 +\dfrac14 +\cdots \\[5pt]
&1 + \dfrac14 +\dfrac19 +\dfrac1{16}+\cdots+\dfrac1{k^2}+\cdots
\end{align*}
The first series obviously diverges,
and the second is the Harmonic Series,
which also diverges.
The third series is known to converge.
And yet for all three of these series,
$\ell=\lim_{k\to\infty}a_{k+1}/a_k=1$.
\vspace{.4in}
\begin{center}
\textbf{POWER SERIES}
\end{center}
The idea of a \textbf{power series}
is a variation on the geometric series.
Instead of just considering the geometric series
\[ 1 + x + x^2 + x^3 + x^4 + \cdots\,, \]
one can allow the powers of~$x$ to have coefficients:
\[ a_0 + a_1x + a_2x^2 + a_3x^3 + a_4x^4 +\cdots\,. \]
As a simple example, suppose we set
$a_n=3^n$ for all~$n$.
Then
\[ \sum_0^\infty a_nx^n=\sum_0^\infty 3^nx^n=\sum_0^\infty(3x)^n\,.\]
This is just the Geometric Series
for the variable $3x$,
hence it converges to $1/(1-3x)$ for $|3x|<1$,
i.\,e.~for $-\tfrac13<x<\tfrac13$, and diverges for $|x|\geq\tfrac13$.
\[\begin{boxedminipage}{.92\linewidth}\parini
The set of~$x$ where a power series $\sum a_nx^n$ converges
is either the whole real line~$\R$,
or consists of the single point~$\{0\}$,
or is an interval centered around~0, whether open, closed, or half-open.
(All these possibilities do occur.)
\par\vspace{6pt}
If \ $\sum a_nx_0^n$ converges conditionally or
$\lim_{n\to\infty}|a_nx_0^n|<\infty$ but
$\sum a_nx_0^n$ diverges, then
$x_0$ is one of the two endpoints bounding the interval
where $\sum a_nx^n$ converges.
\end{boxedminipage} \]
In some cases, this fact enables us to instantly see
for what values of~$x$ a series converges and for what values
it diverges.
Consider the series
\[ x -\dfrac{x^2}2 +\dfrac{x^3}3-\dfrac{x^4}4 +\dfrac{x^5}5+\cdots\,. \]
For $x=1$ this converges by the alternating series test.
But it does not converge absolutely,
since the corresponding absolute value series
\[ 1 +\dfrac{1}2 +\dfrac{1}3+\dfrac{1}4 +\dfrac{1}5+\cdots \]
diverges.
Therefore, the principle claimed above
enables us to conclude that the series
converges for all~$x$ with $|x|<1$ and diverges for all~$x$
with $|x|>1$.
As we have seen,
the series converges for $x=+1$,
and it diverges for~$x=-1$ because in this case
it becomes the negative of the Harmonic Series.
It remains to see why this extremely useful principle is true.
It is in fact a consequence of an even more far-reaching principle.
\subheading{Proposition [Abel]}
Suppose that there is an upper bound~$B$ such that
$|a_nx_0^n|\leq B$ for all~$n$.
Then the series $\sum a_nx^n$ converges absolutely
for all~$x$ with $|x|<|x_0|$.
\proof Suppose $|x|<|x_0|$ and set $r=|x/x_0|$, so that $r<1$.
Then $|a_nx^n|=|a_nx_0^n|\,|x/x_0|^n\leq Br^n$,
so $\sum|a_nx^n|$ converges by comparison with the
Geometric Series $\sum Br^n$.\qed
\medskip
\subheading{Corollary}
If \ $\sum a_nx_1^n$ diverges, then $\sum a_nx^n$
diverges for all~$x$ with $|x|>|x_1|$.
\proof If $|x|>|x_1|$ then
$\sum a_nx^n$ can't converge,
because otherwise by the Proposition
(applied with~$x$ playing the role of~$x_0$),
$\sum a_nx_1^n$ would converge absolutely,
contrary to the assumption.\qed
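To see the Proposition in action, take the series $\sum x^n/n$
and $x_0=1$. The terms $|a_nx_0^n|=1/n$ are bounded by $B=1$,
so the Proposition guarantees that $\sum x^n/n$ converges absolutely
for every~$x$ with $|x|<1$, even though at $x_0=1$ itself
the series (the Harmonic Series) diverges.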
\bigskip
\section*{\textbf{Radius of Convergence}}
We started out by asserting that if
$\sum a_nx^n$ converges conditionally for a certain value~$x_0$
of~$x$, or if $\sum a_nx_0^n$ diverges but
$\lim_{n\to\infty}|a_nx_0^n|<\infty$,
then the series $\sum a_nx^n$ converges absolutely
for all~$x$ with $|x|<|x_0|$
and diverges for all $x$ with $|x|>|x_0|$.
Clearly this assertion is included in the statements
of the preceding Proposition and its Corollary.
In fact, though, we get an even more general result.
What we see from the proposition and its corollary
is that the set of values~$x$ where $\sum a_nx^n$ converges
must be an interval
(whether open, closed, or half-open) centered at the origin.
The only exceptions are the cases when the series converges
for all real numbers~$x$,
and the case where it diverges for all~$x$
except $x=0$.
In other words:
\bigskip\subheading{Theorem} For a power series
$\sum a_nx^n$, there are only three possibilities.
\par (1) The series converges for all values of~$x$.
\par (2) The series converges only when $x=0$.
\par (3) There exists a positive number~$R$
such that $\sum a_nx^n$ converges absolutely
whenever $|x|<R$ and diverges whenever $|x|>R$.
In case (3), the number~$R$ is called the
\textbf{radius of convergence} of the series.
In case~(1), we say that the radius of convergence is~$\infty$,
and in case~(2) we say that it is zero.
\medskip
If $\sum a_nx^n$ has a finite non-zero radius of convergence~$R$,
then the convergence of the series is roughly the same
as the convergence
of the series~$\displaystyle\sum_0^\infty\dfrac{x^n}{R^n}$,
which is a simple variation on the Geometric Series.
The only difference is that $\sum a_nx^n$ may converge
when $x=R$ or $x=-R$ or both,
whereas $\displaystyle\sum_0^\infty\dfrac{x^n}{R^n}$
diverges for both these values.
If $R$ is the radius of convergence for $\sum a_nx^n$,
then $\lim_{n\to\infty}|a_n|R^n$ can be zero, infinity,
or any positive number, or may not exist at all.
Examples of these four possibilities are as follows:
\begin{align*}
1 + x + x^2 + x^3 +\cdots \qquad & a_n=1, \quad R=1, \quad
\lim_{n\to\infty}a_nR^n=1 \\[2pt]
x + 2x^2 + 3x^3 + 4x^4 +\cdots \qquad & a_n=n, \quad R=1, \quad
\lim_{n\to\infty}a_nR^n=\infty \\
x+ \dfrac{x^2}2 + \dfrac{x^3}3 +\cdots \qquad & a_n=\dfrac1n,
\quad R=1, \quad
\lim_{n\to\infty}a_nR^n=0 \\[6pt]
x + 3x^3 +5x^5 +7x^7 +\cdots \qquad &
a_n=\begin{cases}n \text{ for $n$ odd } \\
0 \text{ for $n$ even }
\end{cases}
\quad R=1, \quad
\lim_{n\to\infty}a_nR^n \text{ does not exist.}
\end{align*}
\medskip
Suppose now that $\lim_{n\to\infty}|a_n|R^n$ does exist (or is infinity).
The nice thing is that, according to the Proposition above,
if $|x|>R$, then $\lim_{n\to\infty}|a_nx^n|=\infty$,
which makes it really easy to see that the series diverges in this case.
Obviously if $|x|<R$ then $\lim_{n\to\infty}a_nx^n=0$,
since in this case the series converges.
Thus $R$ is the point on the positive number line
that separates those $x>0$ such that $\lim_{n\to\infty}a_nx^n=0$
from those such that $\lim_{n\to\infty}|a_nx^n|=\infty$.
It would be simplistic to say that as~$n$ approaches infinity,
$|a_n|$ becomes roughly (or ``asymptotically'')
comparable to a constant multiple of $(1/R)^n$.
However what is true is that
\[ \lim_{n\to\infty}\left|\dfrac{a_{n}}{a_{n+1}}\right|=R \]
or else the limit is undefined.
It is also true that
\[ \lim_{n\to\infty} \sqrt[n]{|a_n|}=\dfrac1R \]
if the limit exists at all.
These two formulas make it fairly easy to compute the radius of
convergence for most power series.
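For instance, for the series $\displaystyle\sum_0^\infty\dfrac{x^n}{n!}$
one has $a_n=1/n!$, so
\[ \lim_{n\to\infty}\left|\dfrac{a_n}{a_{n+1}}\right|
=\lim_{n\to\infty}\dfrac{(n+1)!}{n!}=\lim_{n\to\infty}(n+1)=\infty\,, \]
showing that this series converges for all~$x$.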
The most frequent case where the two limits above do not exist
is when the series skips powers of~$x$.
For instance the series for the sine function
contains only odd powers of~$x$:
\[ \sin x = x -\dfrac{x^3}{3!}+\dfrac{x^5}{5!}-\dfrac{x^7}{7!}+\cdots \]
Clearly, then, the above two limits do not exist.
One can get around this however by re-writing the series as
\[ x\left( 1-\dfrac{x^2}{3!}+\dfrac{(x^2)^2}{5!}
-\dfrac{(x^2)^3}{7!}+\cdots \right) \]
and then regarding what's inside the parentheses
as a series in the variable~$x^2$.
If one writes $b_n$ for the coefficients of this series
(\ie $b_0=1$, $b_1=-1/3!$, $b_2=1/5!$, etc.)
then one gets a radius of convergence
for this series as
$\displaystyle S=1/\lim_{n\to\infty}\left|\dfrac{b_{n+1}}{b_n}\right|$.
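For the sine series, $b_n=(-1)^n/(2n+1)!$, so
\[ \left|\dfrac{b_{n+1}}{b_n}\right|
=\dfrac{(2n+1)!}{(2n+3)!}=\dfrac1{(2n+2)(2n+3)}\longrightarrow 0\,, \]
and therefore $S=\infty$.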
Thus the original series converges for $|x^2|<S$
and diverges for $|x^2|>S$;
in other words, its radius of convergence is~$\sqrt S$.
(For the sine series $S=\infty$, so it converges for all~$x$.)
\vspace{.3in}
\section*{\textbf{Power Series with Complex Numbers}}
Everything said above about power series remains true
if we allow~$x$ to be a complex number.
In particular, for a power series $\sum a_nx^n$ with complex~$x$,
there are again only three possibilities.
\par (1) The series converges for all values of~$x$.
\par (2) The series converges only when $x=0$.
\par (3) There exists a positive number~$R$
such that $\sum a_nx^n$ converges absolutely
whenever $|x|<R$ and diverges whenever $|x|>R$.
In case (3), the number~$R$ is called the
\textbf{radius of convergence} of the series.
It is usual to think of a complex number
$c+id$ as corresponding to the point in the Euclidean plane
with coordinates~$(c,d)$.
In terms of this representation,
we see that the power series converges for all points~$x$
strictly inside the circle around~0 with radius~$R$,
and diverges for all points strictly outside that circle.
For points actually on the circle itself,
convergence is a delicate question.
In case~(1), we say that the radius of convergence is~$\infty$,
and in case~(2) we say that it is zero.
\vspace{.2in}
\section*{\textbf{Why Use Complex Numbers?}}
One might wonder why anyone would ever want to do calculus
using complex numbers.
There are three reasons why this can be worthwhile.
In the first place, even though complex numbers do not occur
in most problems from ordinary life,
there are many places in science where they are quite useful.
This is especially true in electrical engineering,
where they are a standard tool for the study
of \textsc{ac} circuits.
Secondly, complex numbers enable one to considerably simplify
certain sorts of calculations even when one only really cares
about real numbers.
And third, in a lot of ways calculus simply makes more sense
when one uses complex numbers
and it becomes a much more beautiful subject.
Although there is a sizable literature on the philosophy of mathematics
(very little of which I have read),
I don't know of anyone who has ever tried to discuss precisely
what we mean when we talk about ``beauty'' in mathematics.
But it seems to me that one of the things that makes
a particular piece of mathematics beautiful is
the existence of an unexpected orderliness ---
a nice pattern where one would have expected only chaos.
In fact, I think that one of the things that attracts
people to the study of mathematics
is that in many parts of mathematics one finds a world
that is orderly, a world where things \emph{make sense}
in a way that things in the real world rarely do.
(And yet the study of this unreal, orderly mathematical world,
which is almost like a psychotic fantasy that an obsessive-compulsive
personality might have come up with,
can produce very useful applications in terms of the real world.
This is a fascinating paradox.)
When one first thinks about the set of points at which
a power series would converge,
one might imagine that it could have any conceivable form,
or even be completely formless.
To me, the fact that this set of points in fact forms
the interior of a circle\,---\,geometry's most perfect figure\,---%
is beautiful.
But furthermore, when one looks at complex numbers
one discovers that the radius of convergence for a power series
is exactly what it needs to be.
The radius of convergence for a power series
will always be as big as it possibly can
in order to avoid having any singularities of the
corresponding function within the circle of convergence.
(Unfortunately, though, I don't know any proof of this
simple enough to include here.)
If we look at the expansion for $1/(1-x)$
as a series in powers of~$x$, for instance,
we find that the radius of convergence is~1.
This makes sense: it couldn't be any larger than~1
since the function $1/(1-x)$ blows up at $x=1$.
Likewise, the radius of convergence for
the expansion of~$\ln(1+x)$ as a series in powers of~$x$ is~1,
which is just small enough to avoid the point
$x=-1$ where the function blows up.
On the other hand, if we think only of real numbers,
the function $\tan\inv x$ has no bad points.
It is continuous, differentiable, and even analytic
for all real numbers~$x$.
So then why does its power series
\[ x - \dfrac{x^3}3 + \dfrac{x^5}5 -\dfrac{x^7}7 +\cdots \]
have~1 as its radius of convergence rather than~$\infty$?
As long as we only consider real values for~$x$,
it doesn't make any sense.
But if we look at complex values it does.
Remember that the radius of convergence for $\tan\inv x$
will be the same as the radius of convergence for its derivative.
And the derivative of $\tan\inv x$ is $1/(1+x^2)$.
Now $1/(1+x^2)$ can never blow up as long as $x$ is a real number,
since in this case $x^2$ is positive and so $1+x^2\=0$.
But if $x=i$ (where $i=\sqrt{-1\,}$),
then $1+x^2=0$ and so $1/(1+x^2)$ blows up.
And once we figure out how to define $\tan\inv x$
when $x$ is a complex number,
we will discover that this function also blows up
when $x=i$.
The radius of convergence for the power series
$\tan\inv x$ is thus as big as it possibly can be
and yet still exclude the singular point $x=i$.
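In fact, for $1/(1+x^2)$ one can see the radius directly:
substituting $-x^2$ for the variable in the Geometric Series gives
\[ \dfrac1{1+x^2}=1-x^2+x^4-x^6+\cdots\,, \]
which converges precisely when $|{-x^2}|<1$, i.\,e.~when $|x|<1$.
And $|x|<1$ is exactly the condition that $x$ lie strictly inside
the circle passing through the singular points $\pm i$.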
\vspace{.3in}
\section*{\textbf{Differentiation and Integration of Power Series}}
\bigskip
If we take the Geometric Series
\[ \dfrac1{1-x}=1 + x + x^2 + x^3 + x^4 + x^5 +\cdots \]
and differentiate it in the obvious way,
we get an equation
\[ \dfrac1{(1-x)^2}= 1 + 2x + 3x^2 + 4x^3 + 5x^4 +\cdots\,. \]
(Note that the differentiation of the left hand side
produces two minus signs which cancel each other.)
On the other hand, if we integrate the geometric series
in the obvious way we get an equation
\[ -\ln(1-x) = C+ x + \dfrac {x^2}2 + \dfrac{x^3}3 +\dfrac{x^4}4
+\cdots \]
(where $C$ is the constant of integration,
which in this case must be equal to $\ln 1=0$).
These two results are actually correct.
In fact, simple algebra shows that
\begin{align*}(1-x)(1 + 2&x + 3x^2 +4x^3 +\cdots+(n+1)x^n) \\
= 1 + 2&x + 3x^2 +4x^3 +\cdots+(n+1)x^n \\
- &x - 2x^2 - 3x^3 - \cdots \qquad - nx^n-(n+1)x^{n+1} \\
=1 + &x + x^2 + x^3 +\cdots\quad\qquad +x^n - (n+1)x^{n+1}
\end{align*}
and so
\begin{align*}(1-x)^2(1 + 2x + 3x^2 +4x^3 +\cdots+(n+1)x^n) &=
(1-x)(1 + x + x^2 +\cdots + x^n -(n+1)x^{n+1}) \\
&=1-(n+2)x^{n+1} +(n+1)x^{n+2}\,.
\end{align*}
Since $\lim_{n\to\infty}x^n=0$ for $|x|<1$,
it follows that
\[ 1 + 2x + 3x^2 + 4x^3 +\cdots
= \lim_{n\to\infty}\dfrac{1-(n+2)x^{n+1}+(n+1)x^{n+2}}{(1-x)^2}
=\dfrac1{(1-x)^2} \]
for $|x|<1$ and the series diverges
for $|x|\geq 1$.
The correctness of the second formula
can be verified if one knows the Taylor
series expansion of the natural logarithm function,
except that in this case the series also converges
for $x=-1$ by the Alternating Series Test.
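Incidentally, replacing $x$ by $-x$ in the integrated formula yields
\[ \ln(1+x)=x-\dfrac{x^2}2+\dfrac{x^3}3-\dfrac{x^4}4+\cdots\,, \]
which is exactly the series considered earlier.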
It may seem quite obvious that, in general,
if
\[ f(x) = a_0 + a_1x + a_2x^2 + a_3x^3 +\cdots \]
then
\[ f'(x) = a_1 + 2a_2x + 3a_3x^2 +\cdots\,. \]
But this, although true, is not as obvious as it might seem.
Because the notation makes an infinite series
look like merely a process of adding an enormous
number of things up,
it is tempting to assume that all the things
that work for algebraic addition
will also work for infinite series.
But in fact this is not always the case,
and there do exist infinite series
where differentiation does not yield the result
one would expect it to.
This is shown by the fact that the series
for $\arctan x$,
\[ x - \dfrac{x^3}3 + \dfrac{x^5}5 -\dfrac{x^7}7 +\cdots\,, \]
converges for $-1\leq x\leq 1$,
whereas the derivative series,
\[ 1 - x^2 + x^4 - x^6 +\cdots\, \]
(which represents the function $1/(1+x^2)$)
does not converge for $x=\pm1$.
A less simplistic example is the series
\[ \cos x + \dfrac14\cos8x +\dfrac19\cos 27x +\cdots
+\dfrac1{n^2}\cos n^3x+\cdots\,.\]
(Note that this is not a power series.)
Comparison to the series $\sum \dfrac1{n^2}$ shows that
this converges absolutely for all~$x$.
But if we differentiate it term by term,
we get
\[ -\sin x -2\sin8x -3\sin 27x -\cdots -n\sin n^3 x
-\cdots \]
which seems to clearly diverge for most values of~$x$.
(The derivative series clearly converges when
$x$ is a multiple of $\pi$,
since all the terms are then~0.
But when $\Fstrut x=2k\pi+\dfrac{\pi}4$, for instance,
where~$k$ is an integer,
the series looks like
\begin{align*}
&-\sin \dfrac{\pi}4 -2\sin\dfrac{8\pi}4
-3\sin\dfrac{27\pi}4 -\ 4\sin \dfrac{64\pi}4
- 5\sin \dfrac{125\pi}4
-\cdots -n\sin \dfrac{n^3\pi}4 +\cdots \\[6pt]
=&-\dfrac{\sqrt 2}2 - 0 - 3\cdot\dfrac{\sqrt 2}2
- 0 +5\cdot\dfrac{\sqrt 2}2- 0 +\cdots\,,
\end{align*}
which clearly diverges since we keep seeing
larger and larger terms:
for odd~$n$ we have $\pm\Fstrut\dfrac{n\sqrt 2}2$.
This shows that the domain of convergence for a series
which is not a power series need not be
an interval.)
It turns out, however, that
\textbf{in the case of power series},
differentiating and integrating
in the obvious fashion does always work
in the way one would hope,
the only exception being at the endpoints
of the interval of convergence.
To see why this is true,
let's start by looking at the radius of convergence
of the differentiated series.
Recall that we observed above that the radius of convergence~$R$
for a power series $\sum a_nx^n$
is the point on the positive number line that separates
the set of~$x$ such that
$\lim_{n\to\infty}a_nx^n=0$
from those such that $\lim_{n\to\infty}|a_nx^n|=\infty$.
In particular, for $|x|>R$
then $\lim_{n\to\infty} |a_nx^n|=\infty$.
But then
\[ \lim_{n\to\infty} |na_nx^{n-1}|=\dfrac1{|x|}\lim_{n\to\infty}|na_nx^n|
\geq\dfrac1{|x|}\lim_{n\to\infty}|a_nx^n|=\infty\,, \]
so clearly the differentiated series
$\sum na_nx^{n-1}$ diverges.
In particular, if $R=0$ then the differentiated series
always diverges for $x\=0$.
Now suppose for the moment that the power series $\sum a_nx^n$
has a radius of convergence~$R$
which is neither zero nor infinity.
Then we have seen above that
the tweaked form of the comparison test
can be used to see that, as respects convergence,
the series $\sum a_nx^n$ behaves exactly like
the Geometric Series
\[ \sum \left(\dfrac xR\right)^n \]
for $|x|<R$ (where both series converge absolutely)
and for $|x|>R$ (where both series diverge).
(The behavior of the series when $x=\pm R$ is a much more
delicate matter and varies depending on the specific series.)
But the comparison test can then also be used
to show that the series
$\sum na_nx^{n-1}$
and
\[ \sum n\left(\dfrac xR\right)^{n-1} \] have the same radius
of convergence.
But the radius of convergence
for $\sum n\left(\dfrac xR\right)^{n-1}$
has already been shown to be~$R$.
The reasoning here is simplest if,
as is often the case, $\lim_{n\to\infty}|a_n|R^n<\infty$.
(This would always be the case, for instance,
if the power series converges at either of the two endpoints
of its interval of convergence.)
From this we get
\[ \lim_{n\to\infty}\dfrac{|na_nx^{n-1}|}{|n(x/R)^{n-1}|}
= \dfrac1R\lim_{n\to\infty}|a_nR^n|<\infty\, \]
(recall that we are temporarily assuming that
$R\=0,\,\infty$),
so the limit comparison test shows that
$\sum na_nx^{n-1}$ converges absolutely whenever
$\Fstrut\sum n\left(\dfrac xR\right)^{n-1}$ does.
Since it was shown above that $\sum n\left(\dfrac xR\right)^{n-1}$
converges absolutely when $|x/R\,|<1$ and diverges for $|x/R\,|>1$,
it follows that
\[ 0 + a_1 + 2a_2x + 3a_3x^2 + 4a_4x^3 +\cdots \]
converges for $|x|<R$ and diverges for $|x|>R$.
(Divergence for $|x|>R$ also follows by comparison
with the original series, since $|na_n|\geq|a_n|$ for $n\geq1$.)
Now if $\lim_{n\to\infty}|a_n|R^n=\infty$,
then we have to use a little more finesse to prove convergence for
$|x|<R$.