How Does One Do Mathematical Research?

(Or Maybe How Not To)

Lee Lady

 

A student once sent me an email asking me how one goes about doing research in mathematics. I guess that one of the first thoughts that crossed my mind was, "Boy, did you ever ask the wrong person!" Doing research was for me never easy, and certainly not a thing that I ever thought I knew how to do.

Aside from this, it seemed to me that this was like the perennial question asked of science fiction writers (and in fact all writers): Where do you get your ideas?

When asked this, Harlan Ellison used to respond that he would send a twenty dollar bill and a self-addressed stamped envelope to a certain address in Schenectady, and a few weeks later he would be sent an idea.

Isaac Asimov used to say that he would get lots of ideas for stories while shaving. (Somehow I suspect that he did not use an electric razor.) I've never heard of any mathematicians who got their ideas by shaving. If there's any merit to this approach, it certainly puts women at a disadvantage.

When Einstein was asked how one gets scientific ideas, he replied, "I wouldn't know. I've only had two or three ideas in my whole life."

A less facetious answer than Harlan Ellison's was given to me by an artist I once knew: "Ideas grow out of other ideas." At its best, I think that that's the way things work. More precisely, sometimes it is one's own past work (or, more often, current work) that one gets ideas from, and sometimes they come from thinking about other mathematicians' work.

For my part, I think that this review of my work will show that my own forte was using other people's ideas, especially for purposes for which they were not originally intended.

In any case, ideas don't come out of nowhere.

In my case, it was usually a matter of scrounging. If there is such a thing as mathematician's block, similar to writer's block, then it is certainly something I have had a great deal of first hand experience with.

 

How One Learns to Do It

For most mathematicians, the process of learning to do research is fairly standard. One starts out by taking classes in graduate school, and doing a lot of homework problems, almost all of which either require one to prove a certain statement or to give an example of a certain phenomenon. ("Give an example of a ring which is not noetherian but is locally noetherian.") Solving these homework problems involves taking pieces of reasoning that one has seen in class or in books and using these pieces of reasoning in slightly different ways.

Working on problems like this, a mathematics student starts learning the moves, as an actor might say. Or learning his chops (or his licks) as a jazz musician might say. And then eventually, after taking enough courses and doing enough homework problems, one passes the departmental comprehensive exams and finds a dissertation advisor, who will recommend a journal article for the student to read. (In a few cases, the student will already have some experience reading journal articles, but more typically not.) Somewhere in this article, the advisor believes, there will be ideas that are worth working on and which suggest questions which the student may stand a reasonable chance of solving. Often the advisor will suggest a particular question which seems a promising one to investigate.

Closed-ended Questions and Open-ended Questions

In my experience, there are two types of mathematical questions: the open-ended one, where one doesn't even know for sure what one is looking for, and the closed-ended one. A typical closed-ended question might ask, "Is X true?" (or perhaps "Under what conditions is X true?" which is slightly less closed-ended) whereas the typical open-ended question might have the form, "What can you say about Y?"

There is a particular kind of open-ended idea which I call the blue-sky idea. Namely, someone sitting at his desk with his feet up and looking out the window has come up with some concept, and now hopes that he or someone he knows can come up with a good way of using that idea. At conferences one sometimes encounters mathematicians who specialize in this sort of idea, buttonholing everyone who can't manage to avoid them and inflicting their most recent blue-sky idea on them. My first advisor, when I was at UCSD, was a mathematician who had done some quite illustrious work in analysis, and now wanted to start doing work in algebra. But, in my opinion, he had no idea how to go about doing this. He gave me a blue-sky idea. I had no idea how to work on it, and when I told him this he just more or less waved his hands. One day I was complaining to one of my fellow graduate students that I hadn't made any progress at all on my dissertation, and he asked me, "Are you working forty hours a week on it?" I don't think I gave him any response at all. I was too embarrassed to tell him that I didn't know how to go about working even one hour a week on it.

It was very refreshing when I got to New Mexico State and was able to find an advisor, viz. Fred Richman, who actually had quite a bit of experience doing algebraic research. Working with Richman was an interesting experience. I learned never to stop by his office unless I had the rest of the afternoon free. He would ask me, "Well, what have you done since last week?" And I would reply, "Nothing at all," and then tell him about some of the things I had tried that had turned out to be dead ends. And then he would ask me questions. I'd try my best to answer his questions, because I was too embarrassed to admit that I'd never thought about those things at all. And by the time I left his office, I felt that maybe I was getting somewhere after all.

When one looks at the history of mathematics, it may seem that a lot of the most important developments have come out of blue-sky ideas. But in fact, from what I know, this is almost never the case. Good ideas always arise out of some existing line of thought, and it is only after one has put a lot of effort in that one realizes that there is some grandiose general concept that underlies all one's work.

For instance, I am pretty sure that Eilenberg and Steenrod didn't sit down together over a beer one day and say, "Wouldn't it be neat to draw up a set of axioms for a thing called a category and then invent a concept called a functor, and then see if these would be useful for anything?" It's pretty clear that in fact they noticed that in algebraic topology the same sorts of situations keep coming up over and over again, and one keeps seeing different theorems with different subject matter, where somehow the proofs always turned out to be more or less the same. So they saw (I believe) the need for a vocabulary and a conceptual framework that would enable mathematicians to talk about all this in a unified way. Ergo: category theory.

 

How I Did It

The best way I can think of to answer the question of how mathematical research is done is by talking about how a lot of my own research came about. Unfortunately, this involves talking about an enormous amount of technicalities. Originally, I was hoping that I could provide enough explanation so that a graduate student in mathematics could get some general sense of what I was talking about, but as it turns out, I find myself completely incapable of doing that. In fact, nobody is going to be able to follow the details here. But maybe a few people will be able to get some general sense of what it's like to be involved in mathematical thinking.

By way of brief explanation, though, let me say that most of my research, except for my dissertation, has involved mathematical structures called finite rank torsion free abelian groups. Surprisingly enough, unlike most similar structures, in this case there are some shortcuts which enable one to fairly easily describe what these are. (It's quite a bit more difficult to define the individual words than to define the term as a whole.) But having that description won't get you very far.

A finite rank torsion free abelian group can be thought of as a configuration consisting of vectors (i.e. singly-indexed arrays) where the entries are certain rational numbers. The group is determined by specifying exactly what rational numbers occur in which coordinates. For instance, as a very simple example we might consider the set of all vectors of the form (x+z, y+z), where x is allowed to be an integer or a rational number whose denominator is a power of 5, y is an integer or a rational number whose denominator is a power of 6, and z is an integer or a rational number whose denominator is a power of 11. Thus this group would contain elements such as (1/5, 0), (0, 1/6), and (4 + 17/11, 9/6 + 17/11). The essential thing is that what interests us is the group as a whole, and not the individual vectors that make up the group. Somehow or other the group as a whole has a shape (for want of a better word), and this shape is what we care about.
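
For readers who like to see even this much spelled out concretely, here is a small Python sketch (purely my own illustration, with invented function names, not anything belonging to the theory) which builds elements of this sample group from an admissible x, y, and z. One small point: once fractions are reduced, allowing denominators which are powers of 6 amounts to allowing only the primes 2 and 3 in the denominator.

    from fractions import Fraction

    def only_primes(n, allowed):
        # True if every prime factor of the positive integer n lies in `allowed`.
        for p in allowed:
            while n % p == 0:
                n //= p
        return n == 1

    def element(x, y, z):
        # Form the vector (x+z, y+z), checking that x involves only the prime 5
        # in its denominator, y only 2 and 3 (powers of 6), and z only 11,
        # as in the description above.
        x, y, z = Fraction(x), Fraction(y), Fraction(z)
        assert only_primes(x.denominator, [5])
        assert only_primes(y.denominator, [2, 3])
        assert only_primes(z.denominator, [11])
        return (x + z, y + z)

    # The three sample elements mentioned above:
    print(element(Fraction(1, 5), 0, 0))                 # (1/5, 0)
    print(element(0, Fraction(1, 6), 0))                 # (0, 1/6)
    print(element(4, Fraction(9, 6), Fraction(17, 11)))  # (4 + 17/11, 9/6 + 17/11)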

The vectors in the group will always all have the same dimension, and in most cases this dimension will be what's called the rank of the group, which can be very large. There are four general rules that must be respected: (1) zero, i.e. the vector whose coordinates are all 0, must be in the group; (2) if any element (i.e. vector) is in the group, then its negative must also be in the group; (3) if an element is in the group, then any multiple of it must also be in the group. (This applies to integer multiples, not necessarily to fractional multiples.) (4) if two elements are in the group, then their sum and difference must also be in the group. To be interesting, a group should not be finitely generated. I.e. it needs to contain more elements than can be obtained by combining the vectors in some finite set. Groups where the entries in the vectors are all integers also turn out to be extremely simple, and thus uninteresting. As illustrated above, in constructing examples, it is usually the denominators which appear in various positions in the vectors which determine the overall shape of the group.
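
In symbols (this is just a restatement of the four rules above, in notation of my own choosing), the rules say that such a group G is nothing more than an additive subgroup of the vector space Q^n:

    G \subseteq \mathbb{Q}^n, \qquad 0 \in G, \qquad
    g \in G \ \Rightarrow\ -g \in G, \qquad
    g \in G,\ m \in \mathbb{Z} \ \Rightarrow\ mg \in G, \qquad
    g, h \in G \ \Rightarrow\ g + h,\ g - h \in G.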

Finite rank torsion free groups are thus very simple mathematical structures, but the resulting theory is not at all simple.

The fundamental question in the theory is to describe all possible finite rank torsion free groups in a coherent structured way, i.e.  to "classify" all finite rank torsion free groups. Since this question is much too difficult to ever be answered, it can be reframed as the problem of finding qualitative ways to describe groups which will provide useful information about the group. A starting point is the observation that two groups which are not the same can nonetheless be essentially identical in their algebraic structure. In this case, the groups are called isomorphic. Many of the questions which one asks about finite rank torsion free groups will include the phrase "up to isomorphism."

One important way of describing a finite rank torsion free group is to see whether it breaks apart into a number of smaller pieces, i.e. into a direct sum of two or more smaller groups. For instance, if a group G consists of all vectors of the form (x/2^n, y/2^n + w/7, z/3^n + w/7), where w, x, y, z, and n are allowed to be any integers, then it is clear that every element in G can be broken up uniquely into a sum (x/2^n, 0, 0) + (0, y/2^n + w/7, z/3^n + w/7). This means that G is the direct sum of two subgroups H and K, where H consists of all elements whose last two coordinates are zero, i.e. those of the form (x/2^n, 0, 0). And K consists of all elements whose first coordinate is zero, i.e. those which can be written (0, y/2^n, z/3^n) + (0, w/7, w/7). On the other hand, K cannot be broken up into a direct sum. If one attempts to find subgroups L and M of K which would work, it is fairly easy to see that one of these subgroups must contain all the elements (0, y/2^n, 0) and the other must contain all those of the form (0, 0, z/3^n). (This is not completely obvious, but it's not difficult.) It follows that the first piece must contain the element (0, 1, 0) and the second piece must contain (0, 0, 1). This leaves out the element (0, 1/7, 1/7). If one now also includes this element in L, for instance, this would mean also including its multiple (0, 1, 1). But then one gets a contradiction to the requirement that each element in K can be obtained in only one way as a sum of elements from L and M.
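
Purely as an illustration of the decomposition G = H ⊕ K just described (again my own sketch, with invented names), here is how the splitting looks if one computes with it:

    from fractions import Fraction

    def g_element(w, x, y, z, n):
        # A typical element of G: (x/2^n, y/2^n + w/7, z/3^n + w/7).
        return (Fraction(x, 2**n),
                Fraction(y, 2**n) + Fraction(w, 7),
                Fraction(z, 3**n) + Fraction(w, 7))

    def split(v):
        # The decomposition G = H (+) K: the H-part lives in the first
        # coordinate, the K-part in the last two.
        a, b, c = v
        return (a, Fraction(0), Fraction(0)), (Fraction(0), b, c)

    v = g_element(w=3, x=5, y=1, z=2, n=4)
    h_part, k_part = split(v)
    assert tuple(p + q for p, q in zip(h_part, k_part)) == v  # the two pieces add back up to v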

Not many readers will be able to follow the preceding paragraph, especially since I've been rather sloppy in the way I've stated things. But what I'm giving you here is not a tutorial on finite rank torsion free groups. I simply want to give you some vague sense of a few of the basic words.

The most important point though is that direct sum decompositions are usually not as obvious as the ones in this example.

It is reasonable to ask why anyone would want to study groups of vectors of this sort. And I don't have a really good answer, except that it is a long-standing subject of interest in mathematics. My own motivation was also simply that the subject was interesting and very challenging and has connections to a large number of other parts of algebra. The example I've given may seem simplistic, and in fact it is far simpler than most examples in the theory. But in any case, many examples in other parts of mathematics, such as the theory of finite non-abelian groups, which has very important applications in physics and other sciences, look equally simple if presented in a way stripped down to their bare bones. And the methods which one develops in the process of studying finite rank torsion free groups are not necessarily limited to this area.

A Generation Gap

An interesting sidelight here is the fact that I came into mathematics at roughly the time of a great generational gap among mathematicians: namely, between those who thought in terms of category theory and those who did not. My graduate work at the University of Maryland had been for the most part what one might call Old School. Categories and functors were mentioned, of necessity, in the algebraic topology course I took, but even there they weren't really prominent.

But when I arrived at UCSD, Serge Lang's new algebra book had just come out, doing almost everything in terms of category theory. The graduate algebra course was now taught out of Lang, and since I was going to have to take the comprehensive exams at UCSD, that made it pretty clear that I needed to learn Lang's approach. And furthermore, I found myself giving quite a bit of help to my friends among the other graduate students, and that helped me learn the new approaches as well. And I found that I, like most of the younger mathematicians I met, absolutely loved category theory.

On the other hand, I was constantly encountering older mathematicians, especially around the time I first got my Ph.D., who went out of their way to tell me that they didn't see what the big deal about category theory was. They told me that they could prove things just as easily without it, and a lot of it seemed to them like just a matter of stating fairly simple things in a very complicated way. And I would just shrug. What they said seemed to be true, but I still really liked category theory. And gradually I came to think that the appropriate rejoinder to these complaints was that category theory is not primarily a method for proving theorems, but rather a framework for structuring and organizing the knowledge one has.

Well, I have to admit that the course on topos theory I took from John Gray during the year I was at the University of Illinois did seem like a whole lot of stating simple things in a complicated way.

But category theory was essential to the work I did on torsion free abelian groups. Or at least it seemed that way to me. I didn't even want to imagine what it would be like to prove the theorems I had, or even to state them, without having the framework of category theory. And it's interesting to me that the theory of finite rank torsion free abelian groups, which in the way I've been describing it seems so simplistic, seemed to almost inevitably lead into category theory and homological methods, this new, rather sophisticated, and to some older mathematicians rather formidable, way of looking at things.

Having been given this much information, you will be well prepared to be completely lost, especially since I will first talk about my dissertation, which concerned a completely different kind of abelian group that can't be described this way at all.

 

Having the Right Pieces and Being Able to Put Them Together the Right Way

Proving a mathematical theorem or constructing a worthwhile example involves taking a number of pieces and putting them together in a new way. When one can't make progress, it may be because one is not aware of enough pieces. Or it may be that one has all the pieces one needs, but can't manage to get them to fit together. The worst of it is that one usually doesn't know which of these two difficulties is the obstacle one is dealing with.

As a simple example of the first kind, I remember sitting in the library listening to two graduate students try to prove something for a homework problem in a topology course. Namely, they were trying to prove that every finite set is closed. They went through a list of about ten things they knew about closed sets. (This was close to the beginning of the course.) They knew that single points were closed, but they quickly rejected that as not useful: the problem was not about single points. They also knew that finite unions of closed sets were closed, but that fact also seemed useless, since they were only concerned with a single set, not with a union of several.

Several thoughts crossed my mind. One was certainly the thought that these two women were incredibly dense. But it wasn't clear that this was a fair judgment, since these two were obviously at the very beginning stages of learning mathematical thinking. The other thought was that I should go over and simply explain to these two students that the answer they needed was sitting right in front of them. But I wasn't sure that doing this would help them in their growth as mathematics students.

In any case, at many times in my own life I have had this same experience of having all the pieces I needed, but not understanding for several weeks or months how these pieces fit together.

It would seem that any reasonably competent algebraist in a particular area, say abelian group theory, would have all the pieces which are required to prove theorems in that area. My own case shows that this is not completely true; a lot of my biggest successes were due to the fact that I knew things that other abelian group theorists did not. But of course there was also the fact that I was able to see how these things from other fields could be applied to abelian group theory.

But it's not just a matter of knowing things. It's a matter of having experience in using the things that one knows.

My dissertation to a large extent involved looking at abelian groups from a topological point of view, although the original question I was investigating made no mention of topology.

Now there was nothing original about looking at abelian groups from the point of view of topology. The new edition of the first volume of Fuchs, the fundamental text on abelian group theory, devoted a chapter to this. Furthermore, every graduate student knows quite a bit about topology, since it is one of the topics for the mathematics comprehensive exam. Furthermore, my advisor Fred Richman had a particular interest in the use of topology in abelian group theory.

So it was not that I had any knowledge that anybody else was lacking. But topology had always been a big interest for me, and I had worked my way fairly thoroughly through several books on the subject. So I might mention a particular fact in topology, and another student might say, "Oh, sure, I remember that," but for me it was something that would come to mind right away, whereas it might not occur to my fellow student that it might be relevant. For me, it was an easy thing to invent a new topological structure on an abelian group which was what was needed for my theorem. And I was very aware of the fact that the topologies one imposed on abelian groups made these groups into metric spaces. (This fact was in Fuchs. It wasn't that others weren't aware of it, but just that it was not a fact that was used very often, so I think that most others were not very aware of it. But it's hard to say.)

Later on, a topologist would say to me, "But what you abelian group theorists use is just a kind of kindergarten topology. You never use any really deep theorems from the subject." And to the best of my knowledge, this is almost completely true. But I'm certainly glad no one said this to me at the beginning of my work on my dissertation, because I might have abandoned the topological approach as, if not a blind alley, then at least a pointless detour, and then I almost certainly would never have been successful in proving the theorem I was working on.

Because one day, listening to another student talk about his dissertation, I heard him mention the Baire Category Theorem. This is a moderately deep theorem in elementary topology that every graduate student learns, but it is seldom used by topologists. Instead, it is very commonly used in analysis, which is perhaps the branch of mathematics furthest away from algebra. And since analysis had always been a weak point for me, it occurred to me that it would be really neat if I could incorporate the Baire Category Theorem in my dissertation. And so from then on, I was always on the lookout for a way to do this. And eventually it turned out that this was exactly what I needed.

But my familiarity with topology wasn't the main reason that I was able to prove the theorem I needed for my dissertation. The main thing involved was a lot of hard work, and a fair amount of desperation. The result was published as "Countable torsion products of abelian p-groups," Proc. Amer. Math. Soc. 37(1973), pp. 10 - 16.

 

Why I Switched to the Torsion Free Groups

Several fairly prominent abelian group theorists were impressed by my dissertation, primarily because they had tried fairly hard themselves to prove the theorem I did. My advisor himself had told me from the beginning that he had tried unsuccessfully to prove it. But I was never all that thrilled with the dissertation myself, because it never seemed clear to me that the theorem was all that important. And it did not seem to me that it was likely to lead elsewhere.

In fact, my impression at that time was that abelian group theory had pretty much come to the end of the road as far as looking at torsion groups without elements of infinite height, which was the kind of group my dissertation dealt with.

The word on the street was that the real action now was in finite rank torsion free groups. This area had the reputation of being fairly difficult, as well as a bit strange, and there was not an enormous lot of existing research there. So when I learned that Dave Arnold was going to be offering a course on finite rank torsion free groups in the spring semester, I quickly signed up for it.

A great deal of the theory of finite rank torsion free groups at that time consisted of a large number of examples of groups exhibiting very bad behavior as far as direct sum decompositions went. In fact, at first I could simply not believe that direct sums could behave the way these groups did. So I would look at one of these examples, go through the calculations, and see that what was claimed did in fact happen. Then a day or so later, I would be thinking of that example again, and think, "No, there must be a mistake; that can't work." And I would go back to the published calculations, trace through them again, and again see that the impossible did in fact happen. It was only after coming back to one of these examples three or four times that I would finally understand the basic principle that made it work.

The main point is that often a finite rank torsion free group can be broken into pieces (via a direct sum decomposition) in two or more different ways. It is certainly natural to expect that when this happens, the two sets of pieces would look essentially identical (up to isomorphism), as is well known to be the case in many other parts of algebra. One would not expect, for instance, that a group of rank 6 could break apart into two groups of rank 3, each not further decomposable, and would also break apart into an indecomposable group of rank 4 plus one of rank 2. But a number of years before I came on the scene, someone had given examples showing that things like this could indeed happen.

From these examples, it was known that there exist groups of fairly small rank (say 3) which have a thousand different mutually non-isomorphic direct sum decompositions. Or even a million for that matter. (Such a group would always break apart as a group of rank 1 plus a group of rank 2. But there would be a million possible ways of doing this, yielding a million different non-isomorphic groups of rank 2, despite the fact that the rank-one groups would all be isomorphic.) But obtaining an extremely high number of non-isomorphic summands of this sort involves constructing a group which is in its way quite complicated. Or actually, that's not quite true; one just needs to use very large numbers. But the point is that one needs to tailor the group according to the number of summands one is trying to achieve.

The question then arose as to whether there could exist a specific group of finite rank which has an infinite number of direct sum decompositions, all mutually non-isomorphic. (This is a good example of a closed-ended question.)

Thinking about the calculations I had been doing, I was able to prove that there could not exist such a group with rank three, and although this result was not very difficult, it did merit me a footnote in the new edition of Fuchs's text, which I will say I was rather proud of. (At this point I was still a graduate student.)

Since the possible existence of a group with finite rank having an infinite number of non-isomorphic summands was such a well known question, of course I thought it would be really neat to find the answer, whether affirmative or negative. But although I thought about this question quite a bit in quite a few different ways at various times over the next year or so after getting my Ph.D., for some reason I never actually put in a lot of work on trying to solve it. I was always hoping that I could find some known result from outside abelian group theory that could be applied to give me the proof.

Over the course of my career, I put in a lot of extremely hard work proving the theorems that I did, often involving a lot of very hard calculation. (Several times, after finding what I needed by dint of hard calculation, I was able to come back later and find a different way of presenting it, making it seem more "elegant.") But many of the results I proved that most impressed other mathematicians were obtained very cheaply, by using results that I knew from fields that seemed far removed from abelian group theory and which most abelian group theorists either weren't aware of, or at least had never paid much attention to. And that was the case with this particular theorem.

One day, about half a year after I had finished graduate school and taken a job at the University of Kansas, I was in the library reading about a theorem someone had proved in the theory of integral group representations, to the effect that certain classes of modules were finite. No relation to abelian group theory or to my problem, except that what I was doing was trying to prove that the set of possible summands of a finite rank group was finite and this result proved that a very different set of things was finite. So I thought that it might be worthwhile looking up the proof of this result. But to my disappointment, the published proof simply made reference to a theorem I'd never heard of called the Jordan-Zassenhaus Theorem. But after a day or so, I thought that maybe I ought to check this Jordan-Zassenhaus Theorem out, to see if it might have any conceivable relevance to the problem I was interested in.

And at first it didn't seem relevant at all. It showed that a certain sort of ring could not have an infinite number of non-isomorphic ideals. But then, after a day or so, I realized that with this, I could at least prove a special case of what I needed. Namely, that if the endomorphism ring of an abelian group had certain especially nice properties, then that abelian group could not have an infinite number of non-isomorphic summands. It was a sort of stupid little obvious proof, in fact, but because the problem was so well known, I thought that I could probably get away with publishing this special case. Then over the next few days I started thinking that maybe the endomorphism ring didn't have to be so special after all. I could use a fairly standard trick in non-commutative ring theory (a subject which at this point was seen as fairly far removed from abelian group theory) to extend the result a little further. And then over the next day or two, I realized how I could use this Jordan-Zassenhaus Theorem from non-commutative ring theory to completely answer my problem in abelian group theory. In all, my article required only two printed pages in the Journal of Algebra, which pleased me because the Journal of Algebra was usually not willing to publish brief papers.

So this was a case where being able to prove a theorem depended primarily on knowing the right thing. Except I hadn't originally known the Jordan-Zassenhaus theorem. It's as if some mathematical muse had perched on my shoulder and whispered in my ear that it might be worth looking into that.

In my early days doing mathematics, whenever I would start to make substantial progress in investigating a certain question and the resulting paper would start to shape up, I would go into a state of mild panic, for fear that someone else would be working on the same thing and would have a paper ready before mine was. Later on, I learned not to worry about this so much. For one thing, this sort of mathematical scooping is not really very common. And for another thing, most of the questions I worked on were so weird that it was very unlikely that someone else would also address them.

But in this case, the problem was so well known that it was not surprising that other abelian group theorists would also be making it a target. Many months after I finished my paper, it was accepted for publication but had not yet appeared in print. (This is normal lag time for mathematical articles.) And then I got word that another mathematician, Chang Mo Bang from Emory University, had just submitted a paper proving the same theorem. He was proposing to present his proof at an upcoming conference, but now, having been informed that I had already proved the theorem and that my paper was already in press, he withdrew his talk. It was a very sad story, because his proof would have made him deservedly famous within the abelian group community. I and some other mathematicians tried to see that he got some credit for his result, and for a while people were calling it the Lady-Bang Theorem, but that was slight compensation for the fact that he didn't have the publication under his name in a journal, and so his result would be of no help in getting him tenure or promotion. (Maybe he had those already though. In any case, he's still at Emory.)

And what made it really sad was that whereas I had worked only about a week on the theorem, he had put an enormous amount of effort into his proof. He had never heard of the Jordan-Zassenhaus Theorem and essentially had reproved it from scratch.

My proof appeared as "Summands of finite rank torsion free abelian groups," J. Algebra 32(1974), pp. 51 - 52.

 

The interesting thing about the work in torsion free groups that I've mentioned so far is that there is no point at which one can say that I really had an idea. Except maybe the childishly silly idea that if some other mathematician had managed to prove a set of things finite, then I might manage to use the same approach to prove some completely different set of things finite.

 

Sometimes You Start With the Answer

In my younger days, I used to be quite envious of mathematicians who lived long ago, before the twentieth century, say, because they seemed to have so many opportunities to become famous for theorems whose proof only required a few sentences. (Schur's Lemma, i.e. the theorem that says that the endomorphism ring of a simple module is a skew field (division algebra) is a good example.)

But later on, I had occasion to look in some fairly old books, and I found that the theorem whose proof we now know as requiring only a few sentences originally had a proof consisting of a page or two of difficult calculations.

While I was still in graduate school, Dave Arnold found a somewhat similar simplification for the proof of a result in a paper he had been refereeing by Chuck Murley. Dave took a very familiar idea in algebra and used it in a way it had never been used before in abelian group theory. The paper attracted considerable interest within the torsion-free groups community, and I believe there later were quite a few follow-up papers written by various mathematicians. I will give a description of Dave Arnold's idea which is fairly technical, but maybe a few readers will be able to ignore the technicalities and get some general sense of the simplicity of the idea.

What Murley had been concerned with was a particular type of group. I will let A denote a group of this type. Murley had proved that the endomorphism ring of A was a principal ideal domain, which I'll call E. The important thing about this is that modules over a principal ideal domain have an extremely straightforward theory. On the other hand, it is a well known fact in elementary ring theory that if we think of E as a right E-module (actually, left and right are irrelevant in this case, but I'll give the general result) then its endomorphism ring is isomorphic to E itself, i.e. the same as the endomorphism ring of A. Stated in terms of category theory, this says that the category whose sole object is A is isomorphic to the category whose sole object is E (and where morphisms are E-linear). And since it is a standard fact that additive functors (i.e. just about all the functors one encounters in this kind of algebra) behave extremely well with respect to direct sums and direct summands, it was then pretty much self-evident that the only direct summands of a direct sum of a finite number of copies of A were themselves direct sums of copies of A. Pretty obvious, in retrospect, but Murley had put in quite a bit of work to prove it using his approach.
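
For whatever it may be worth, here is the shape of the argument in symbols, in my own compressed paraphrase (so the notation is mine, not Arnold's or Murley's):

    \operatorname{End}(A) \;\cong\; E \;\cong\; \operatorname{End}_E(E_E),
    \qquad
    \operatorname{Hom}(A,-)\colon\ \text{summands of } A^k \ \longleftrightarrow\ \text{summands of } E^k .

Summands of E^k are finitely generated projective E-modules, and over a principal ideal domain these are free; so every summand of A^k is carried back to a direct sum of copies of A.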

For purposes of the present discussion, I will refer to this technique as the Arnold Trick.

A year or two later, I got curious about something in non-commutative ring theory called Morita Equivalence. I found a set of lecture notes on the subject in the library, and eventually I discovered that the Arnold Trick was more or less an example of Morita Equivalence, although a bit more complicated.

About the same time that Dave was developing his Arnold Trick, I was reading a paper by M.C.R. Butler which Arnold had recommended in his course. Somehow in this paper Butler had got away with including some new proofs of some standard theorems, although in most cases journals will not publish new proofs of existing results. These proofs appealed to me a great deal more than the usual ones. Because Dave Arnold's recent work had sensitized me to the topic of endomorphism rings, I noticed that what Butler was really using in some of his proofs was the fact that the endomorphism ring of a rank-one group is a principal ideal domain. And then I realized that some classical theorems which had been around for at least half a century (Baer's Lemma and the like) were actually valid for a much wider class of groups than people had realized. And the upshot was that Dave and I consolidated our results into a joint paper. We wanted to publish it in the Journal of Algebra, which at that time published very few papers in abelian group theory. Dave thought that it might work to submit it to the editor in chief, Graham Higman. What we weren't aware of was that Graham Higman's desk was a notorious black hole (or Bermuda triangle), where papers disappeared never to be seen again. During the ensuing two-year delay, I proved a number of major improvements on our results and was considerably annoyed, because if our paper had already been accepted, I could have published my new results as additional papers, whereas as it was, I had to include them in a revised version of the Arnold-Lady paper. Finally Dave succeeded in withdrawing the paper from the Journal of Algebra and the revised version was published (after another delay of about a year, which is about par for any mathematical paper) in the Transactions of the American Mathematical Society. (D.M. Arnold and E.L. Lady, "Endomorphism rings and direct sums of torsion free abelian groups," Trans. Amer. Math. Soc. 211(1975), pp. 225 - 237.)

I have to acknowledge here that neither Dave Arnold nor I realized at the time quite how simple the Arnold Trick was, and in our paper we felt the need to include some completely unnecessary calculations involving various functors and natural transformations. Dave Arnold's discovery of the trick was not a stroke of genius, although it did seem like it to me at the time. Indeed, the fact that nobody had noticed it previously was almost a stroke of idiocy, as it were. I think that almost anybody whose field was homological algebra or other aspects of modules over rings, especially non-commutative rings, would have spotted it immediately.

But the important thing about the Arnold Trick, and the ensuing Arnold-Lady paper, in my opinion, was that it moved the theory of finite rank torsion free groups into another realm. Up to then, in papers on the subject one usually saw a standard line of proof. One took an element in a group and looked at the p-heights of this element for all prime numbers p. Then one took another element, and adjusted the two sequences of heights so they would match, or mismatch, as one needed. And then did various things with these elements. It was a nuts and bolts approach, as it were.

But in what Dave and I were doing, one didn't need to look inside the group at all. One looked at it from an external point of view, looking at its endomorphism ring, which in some rough (and partly inaccurate) sense is the measure of the symmetries of the group. The point is that instead of using height sequences and other reasoning special to the field of torsion free groups, we saw that we could get almost everything we needed by using familiar principles of general algebra. And as a result, I believe, the realm of torsion free groups didn't seem like such an isolated island in the ocean of abstract algebra.

Butler's paper had led the way in this by pointing out that one could talk about types, as well as proving the basic lemmas that had always been at the core of the theory of torsion groups, without ever looking at height sequences. Butler's paper contained some very good theorems (some of which would soon be very important to me), but as far as I'm concerned, it is one of those rare papers whose real importance lies in the ideas it presented rather than its theorems.

On the other hand, what Dave and I did moved the realm of torsion free groups further away from the world of abelian group theory as a whole. To me, there was nothing objectionable in this.

 

My "Historic" Paper

After I left graduate school, I felt totally adrift, no longer having my advisor Fred Richman or Dave Arnold to discuss my work with. At this point I had three and a half papers to my credit (counting the footnote in Fuchs), and none of them seemed to be work that I could take any further. It was not at all clear to me that I would ever find another topic to do research on. But if I didn't, I would soon lose my job at Kansas.

In my desperation, I thought of a class of groups called almost completely decomposable groups, which were the simplest examples of the groups Butler had looked at in his paper and are only one step removed from completely decomposable groups, which are totally well behaved and thus not at all interesting. (The very simple example which I gave at the beginning of this article is an almost completely decomposable group.) So one might expect that almost completely decomposable groups would be fairly civilized. But in fact, all the standard examples of bad behavior in direct sum decompositions were almost completely decomposable.

Dave Arnold had once told me that almost completely decomposable groups were much too simplistic to be of any real interest. So now it occurred to me that maybe if these groups were really so simple, then I might have some chance of proving a few theorems about them.

Here one has an example of an open-ended question: "What can one say about almost completely decomposable groups?"

I needed to think of some questions which one might reasonably ask about these groups. And I thought about a paper Dave Arnold had once showed me in which he tried to investigate algebraic K-theory as applied to the context of finite rank torsion free groups. In fact, I didn't think very highly of this paper, but it was at least good enough to be published, and I had given it quite a bit of thought, in the hopes that I might manage to find some publishable results of this sort myself. Now it occurred to me that maybe if I restricted my attention to this very simple class of almost completely decomposable groups, then I might be able to come up with something.

To be faced with the task of proving a theorem that has no obvious connection to what one knows is like standing in front of an impenetrable building with no doors and windows. It seems like there's no way in at all. But somehow one must find a crack in the facade or a basement window that can be pried ajar or some ribbon or thread or rope leading inside.

The situation is even worse when one has no idea what theorem one is looking for.

After the fact, mathematicians make up all sorts of fairy tales about the path they followed to get to the theorem. These are fairy tales they believe themselves. "Well, first I thought about X. And that led me to wonder about Y. And then I realized Z."

But in fact, the path one actually follows to get to a theorem is like the path of little Billy in the Sunday comic Family Circus. One wanders back and forth in every possible direction, constantly encountering dead ends and No Left Turn signs, and constantly getting back to places where one realizes one has already been several times before.

It would be a lie to say that I started out by asking, "What is it that makes almost completely decomposable groups fail to be completely decomposable?" (This is the sort of lie one tells when giving a lecture or teaching a class.) But it is true that I studied the classical theorems on completely decomposable groups. The main theorem here is one that says that a direct summand of a completely decomposable group is completely decomposable. And the proof of this theorem is almost a recipe for showing that a group is completely decomposable. And so I needed to see what goes wrong if one applies this recipe to an almost completely decomposable group.

What one sees in this proof is that completely decomposable groups exist in layers, corresponding to types (which more or less correspond to rank-one groups). Every element in any torsion free group has a type, with some types being greater than others. By looking at all the elements whose types are greater than or equal to some given type, we get a layer of the group. One can think of these layers as sort of like shelves.

The ordering of the types in the group is not linear, so that there may be several shelves which are all exactly one step above a given shelf. To visualize the shelves in this way is something that even a familiarity with Picasso or M.C. Escher will not be adequate for. Still, impossible as it is to see this, it corresponds somewhat to the way I thought of these groups. (Of course I never wrote any of this down or even spoke it aloud to anyone else. It may be that mathematicians are all crazy, but we don't choose to advertise the fact.)

Now certain types are critical in that there is a major gap between the corresponding shelf and the set of shelves strictly above it. And the point is that in a completely decomposable group, this gap will be filled in with a layer of something like homogeneous bricks. There is a fundamental theorem that says that these homogeneous bricks must all be completely decomposable.

When one breaks a completely decomposable group into two direct summands, the summands will still have the same gaps and will be made up of pieces of the same bricks. This is why a direct summand of a completely decomposable group is itself completely decomposable. (All this is outrageously inaccurate. But it's the best I can do as far as giving some general idea of the proof to someone who knows none of the theory.)

Now what happens in an almost completely decomposable group is that sometimes one set of shelves doesn't sit snugly on top of the ones below it, so that in addition to the major gaps between shelves, which are tiled with homogeneous bricks, there are also little cracks, which are filled in with putty, or with glue. For reasons of my own, I prefer to think of this filler as being glue.

For instance, look at the example above, where the group consists of all vectors of the form (x/2^n, y/2^n + w/7, z/3^n +w/7), where w, x, y, z, and n are allowed to be any integers. One can also write these elements in the form (x/2^n, y/2^n, z/3^n)+w(0, 1/7, 1/7), to emphasize the crucial point of the construction, namely that the number w is the same in the last two coordinates. If we take the subgroup consisting of those vectors of this form which are 0 in the last coordinate, notice that w in this case must be 0, so the vectors in this subgroup have the form (x/2^n, y/2^n, 0). It can be seen that this is a shelf. And if one takes those vectors which are 0 in the first two coordinates, this is also a shelf. These two shelves sit side by side on top of the group as a whole, which is also a shelf. But the fit is not snug, because the elements (0, w/7, w/7) on the bottom shelf are not covered by the top two shelves. The element (0, 1/7, 1/7) is the little bit of glue that fills in the crack between the bottom shelf (the whole group) and the two on top of it.

Each of the tiny bits of glue has a non-zero size. (We make the size be 1 when the fit between a shelf and the ones above it is snug. This doesn't quite match the shelf metaphor, but it's what is needed.) And the product (not the sum) of these sizes yields a number i(G) which is in a certain way a measure of how far the group is from being completely decomposable. (So i(G) = 1 for a completely decomposable group.) The procedure described above will yield a set of shelves which work. I say a set of shelves, because the group may have several different possible arrangements into shelves. But these arrangements will all look the same, except for the little bits of glue, which are more flexible and more mobile than the word "glue" suggests. (It's almost like one of the models of atomic structure in solid state physics, wherein an electron is not usually specifically associated with a given atom, but the set of electrons as a whole bonds the solid together.) The fact that the glue is so flexible about where it bonds is what makes it possible for a group of rank 6 to break apart into two groups of rank three and for the same group to also break apart into three groups of rank 2.
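
To tie this back to the running example (my own computation, so take it as an illustration rather than gospel): in the group G of vectors (x/2^n, y/2^n + w/7, z/3^n + w/7), the obvious completely decomposable subgroup is the set C of vectors with w = 0, and the cosets of C are represented by the multiples of the single bit of glue (0, 1/7, 1/7), so that

    C \;=\; \{(x/2^n,\ y/2^n,\ z/3^n)\} \subseteq G,
    \qquad
    G/C \;\cong\; \mathbb{Z}/7\mathbb{Z},
    \qquad
    [G:C] \;=\; 7,

and, if I have set things up correctly, i(G) = 7 for this group.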

In any case, this arrangement into shelves yields a completely decomposable subgroup which is nice and orderly and is something like the spine of the overall almost completely decomposable group.

If the reader is feeling totally lost and confused at this point, then he will have a good idea of the way I usually feel when I get about this far in proving a theorem. What I have given so far is not a proof. In fact, it was not even clear at this point that I even had a theorem.

Note for the cognoscenti: The "shelves" I am referring to above are the subgroups G(t) for the various types t which are relevant for G. The shelf-glue metaphor is misleading because it doesn't reflect the fact that i(G) is actually a multiplier: for instance, if i(G) = 12, this indicates that the size of G as a whole is 12 times its size without the glue.
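
In the standard notation, the shelf attached to a type t is simply

    G(t) \;=\; \{\, g \in G \;:\; \operatorname{type}(g) \ge t \,\} .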

 

In part, the process of selling a theorem is a matter of packaging. There is a certain conciseness and preciseness which is expected of a theorem, and what I have described above just doesn't have that. But I persevered, and eventually came up with something more compact, in which all the stuff about shelves (restated in more standard mathematical terminology) never made it into the main statement of the theorem but constituted one part of the proof. Or maybe it was a preliminary proposition; I don't remember.

Finding a proof is also a matter of constantly making conjectures and testing them. One thinks, "I could definitely get where I need to be if X were true. So let's see. Oh, damn! Here's an example that definitely shows it's not true. Well, maybe Y is true instead. Oh hell, it's not true either." Testing out one's conjectures is much easier if one has a big supply of examples. Any mathematics course will tell you that an example is not a substitute for a proof, but with experience a good mathematician can definitely see from looking at several of the right examples not only that a theorem is true, but how one can go about proving it. (Of course one should not try to justify oneself to editors and referees, or even colleagues, on this basis. It's also true that in certain parts of mathematics, such as analysis, the use of examples is much more treacherous than in others, because there are so many really weird possibilities that one is not likely to think of.)

I should also mention that some things in Butler's paper were very helpful to me in all this, but he didn't talk a lot about almost completely decomposable groups.

As it so happened, by a lucky piece of misfortune I made an enormous blunder at this point which was included in the first version of the paper which I sent out to various notables in abelian group theory. Namely, I included a theorem that if one looks at all the completely decomposable subgroups C of G which are not contained in any larger completely decomposable subgroups, then the index [G:C] is always the same. (In those days before laser printers, and when use of the xerox machine was considered very expensive at universities like Kansas, these copies were sent out in purple ditto form. Luckily, that means they're probably all faded by now.)

I've always thought that this mistake was responsible for a part of the success of this paper. Because in the process of correcting it, I realized that I needed a name for those completely decomposable subgroups of G with smallest possible index. In mathematics, there's always a shortage of words, because all the good ones (words like normal, regular, simple, basic, and even nice) have been spoken for long ago, and usually used several times in different parts of mathematics. Finally, after wracking my brain and spending much time with the dictionary and thesaurus, and rejecting words such as rigidifying and straightening, I decided to call the groups I was concerned with regulating subgroups. I never much cared for this word, but some people have told me that it's perfect. The regulating subgroup is sort of the straight and perfectly orderly part of the group, and the rest of the group grows around it like....  I don't know a good metaphor. It's not exactly like coral growing on a reef, although I think of it as looking somewhat like that. Maybe it's more like a rose bush growing on a trellis. In any case, I think that having this new word in the paper made it look much more interesting and attracted a wider audience.

I think that mathematicians are always attracted to a new word. They think, Oh, here's a new concept. Maybe it's something I could write a paper on. (But the concept does have to live up to the word, otherwise people feel cheated.)

In any case, the question which I proposed to address was, "What must be true of two almost completely decomposable groups G and H in order that there exist some finite rank group L such that G ⊕ L ≈ H ⊕ L? (I.e. the direct sum of G and L is isomorphic to the direct sum of H and L?)" Stated in the abstract like this, especially stated in words with no symbols, it sounds like a rather contrived question. But in fact, in the context of people's interests at the time, it was definitely an important one.

In fact, although it had been inspired by Dave Arnold's rather lame attempt to apply algebraic K-theory to torsion free groups, this question actually went back to the very roots of the subject. The original examples of bad behavior which had at one time been almost the entirety of the theory of torsion free groups had all been almost completely decomposable groups. Eventually, after a period in which people had constructed an abundance of bizarre examples exhibiting every conceivable form of bad behavior (a little before I came on the scene), people had started addressing questions such as, "What must a group L be like in order that bad examples such as G ⊕ L ≈ H ⊕ L can never be possible when G and H are non-isomorphic?" And a number of papers were written about groups with properties called the Cancellation Property or the Substitution Property. And now my paper was taking the point of view, "Okay, so we know that G ⊕ L ≈ H ⊕ L can sometimes be possible without G being isomorphic to H. But in a case like this there must certainly be some strong resemblance between G and H. What can we say about this resemblance?" From the many calculations I had done, I sort of knew that G and H had to look very much alike. Furthermore, I knew that there had to be, so to speak, a very strong degree of interconnectivity (or interactivity) between G and L and between H and L. But the question was how to refine these insights and encapsulate them in a theorem.

The answer that I eventually came up with for almost completely decomposable groups was that the two indexes i(G) and i(H) must be the same and that each group must be isomorphic to a subgroup of the other having an index relatively prime to i(G). (The word "index" is being used in two different senses here, in case anybody is managing to follow this at all.) I had a hell of a time proving this, but eventually (some time between the purple version I had sent out and the version that actually appeared in print), I realized that I could find a proof using a variation of the Arnold Trick, plus a little theorem that one sees mostly as a homework problem in graduate algebra courses or as a problem on comprehensive exams in algebra. The resulting paper appeared as, "Almost completely decomposable torsion free abelian groups," Proc. Amer. Math. Soc. 45(1974), pp. 41 - 47.
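
In rough symbols (this is my after-the-fact paraphrase of the result, not the precise wording of the published theorem), the answer for almost completely decomposable G and H reads:

    G \oplus L \;\cong\; H \oplus L \ \text{ for some finite rank group } L
    \quad\Longleftrightarrow\quad
    i(G) = i(H), \ \ G \cong H' \le H, \ \ H \cong G' \le G,
    \ \text{ with } \ [H:H'] \text{ and } [G:G'] \text{ relatively prime to } i(G).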

The Arnold Trick and variations on it were things that I used a whole lot after that. This is characteristic for mathematicians. Proving a theorem is not just a matter of having the right pieces and seeing how they fit together, it's also a matter of using certain tools. For the most part, every mathematician in a certain specialty will have the same tools in his belt, or in his toolbox, or maybe in a cupboard down in the basement. But it seems that every researcher has certain preferred tools. If one is reading a major paper by Paul Hill on abelian group theory, for example, one can be fairly sure that eventually the Hill Back-and-Forth Method, devised in its original form by Kaplansky, will appear sooner or later. For me, the Arnold Trick was becoming a favorite tool.

Several years later, a fairly eminent abelian group theorist who should have known better referred to my almost completely decomposable paper as "historic." Certainly the main result was quite nice, and the paper eventually led to quite a few subsequent papers by other mathematicians, but I was never all that thrilled by it myself, because I still basically agreed with Dave Arnold's judgment that almost completely decomposable groups were too simple minded to be of much real importance.

The thing is that the world of almost completely decomposable groups is known territory. We know what these groups look like and we know how to construct them, and we know the phenomena they give rise to and how to create those phenomena. The one thing that's lacking is a systematic way to catalog them all, and I'm not sure there's any real need for that.

But quite a bit of research on the subject was produced after I stopped being interested in it, including a whole book by my colleague Adolf Mader at the University of Hawaii, which was finished quite a while after I became totally disenchanted with doing mathematical research at all. So since I haven't looked at any of this stuff, I'm not really qualified to make a judgment on it.

But for me, the importance of the almost completely decomposable groups paper was that it indicated to me that maybe I was indeed capable of being a mathematician. My dissertation had impressed a number of mathematicians, but it did not convince me that I would ever be able to do any more good mathematics. The Jordan-Zassenhaus paper had been a mere fluke, with only a little bit of real thinking in it. The Arnold-Lady article was, in its original form, simply a matter of noticing certain things that people should have noticed long ago, and seeing certain things that Butler had proved without ever really paying attention. And in any case, that paper, especially in its original form, was more Dave's than mine. But the Almost Completely Decomposable paper, whether or not the results were of major significance, was a real piece of research done all on my own, and it proved that I was capable of finding my own direction and thinking of something that no one else had ever thought of.

 

The Discovery of Near Isomorphism

After I finished the paper on almost completely decomposable groups, the question I naturally thought of was, Can this result be extended any further? In particular, the question I wanted to address was, Do the two groups G and H have to be almost completely decomposable in order for my theorem to work?

Well, in the first place, if G is not almost completely decomposable, then there is no index i(G), so the theorem doesn't even make sense. But I thought of a way around this, and came up with a notion whereby one can say that two groups are nearly isomorphic. And using this notion, I was able to write what was probably the most remarkable paper of my mathematical career. (And this was written pretty much in the first year after I got my Ph.D.)

From my point of view, this was an example of research at its most enjoyable. With near isomorphism, it was almost as if the work started with the answer rather than starting with a question. It's not that the work on near isomorphism was completely easy. But I started with a concept and a theorem that I was hoping would be true, namely that G and H are nearly isomorphic if and only if there exists a group L such that the direct sum of G and L is isomorphic to the direct sum of H and L: G ⊕ L ≈ H ⊕ L. Furthermore, from the almost-completely-decomposable paper, I had a part of the proof. I.e. I could prove that if G, H, and L were three groups, each pair of them nearly isomorphic, then H was isomorphic to a summand of G plus L.  I.e. G ⊕ L ≈ H ⊕ X for some group X. All I needed was to find a way of proving that X was isomorphic to L.

Unfortunately, the theorem I was hoping for was not true. If it had been true, then I think my theorem would have attracted quite a bit of attention, but probably less than it actually did, because the theorem I eventually managed to prove was much more strange and surprising. It's not easy to describe it without using quite a bit of theory, but the idea is that if we use [G] to stand for the K-theory element corresponding to the finite rank torsion free group G, then G and H are nearly isomorphic if and only if there exists a non-zero integer n such that n([G]-[H])=0. This seems to suggest that if we use Gⁿ to stand for the direct sum of n copies of G, then if G and H are nearly isomorphic, then Gⁿ ≈ Hⁿ. Unfortunately, what the theorem actually said was that there exists a finite rank torsion free group L such that Gⁿ ⊕ L ≈ Hⁿ ⊕ L, and this doesn't have quite the same appeal.

A year or two later, Robert Warfield proved that indeed if G and H are nearly isomorphic, then in fact Gⁿ ≈ Hⁿ. Usually when someone else publishes a paper that takes one of my results a little further, I think, "Damn! I should have thought of that." Warfield's proof was so difficult, though, that I doubt that I would ever have come up with it, even by working very hard.
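For readers who would like to see the statements lined up, here is a schematic summary of what was just described (only a restatement of the prose above, not the precise formulation in the published papers; ∼ stands for near isomorphism):

\[
G \sim H \;\iff\; n\bigl([G]-[H]\bigr)=0 \ \text{ for some nonzero integer } n,
\]

and for such an n the theorem gives \( G^{n}\oplus L \cong H^{n}\oplus L \) for some finite rank torsion free group L, which Warfield's later result sharpens to \( G^{n}\cong H^{n} \).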

Near isomorphism was a very powerful idea, but it didn't come about because I was sitting in my office with my feet up and looking out the window and hoping that I could make a major discovery. It was an excellent example of my artist friend's statement that ideas grow out of other ideas.

The paper on nearly isomorphic groups was published as "Nearly isomorphic torsion free abelian groups," J. Algebra 35(1975), pp. 235 - 238.

 

Playing Hooky

When left to my own devices, I was always playing hooky, as it were, by wasting my time (as I then saw it) learning about existing mathematics that fascinated me rather than diligently working on proving new theorems of my own. I would always much rather be reading existing theorems in a book or set of printed lecture notes than, in the words of Tom Waits, getting behind a mule in the morning and plowing. But I think that what I liked most of all was to reformulate and repackage existing theory, whether my own or someone else's.

But whenever I would learn something new, or discover something new, I would always be asking myself, "Is there any way this can be useful in my own work?" Or, "Is there any way these results can be extended even further?" This is something that almost all mathematicians do, although perhaps some more than others.

When I had the opportunity to spend a year at the University of Illinois, I went to as many faculty seminars in algebra as I could and sat in on a few graduate courses. (I believe that Illinois at that time had the largest mathematics department in the world except for Moscow.) Irving Reiner was giving a one-semester seminar (basically just a course he was teaching for free) on a topic called Modules over Orders, which he was then writing a book on. I wanted to learn about this, partly because a few years earlier I'd read his book with Charles Curtis which somewhat touched on this subject. This was in the realm of non-commutative ring theory, which at that time was thought of as fairly far removed from abelian group theory. But I started wondering, "Maybe there's a way I can apply some of my ideas about near isomorphism to these modules that Reiner is talking about." I was always attracted to the idea of publishing a paper in a field far removed from my usual research. But it quickly became apparent that Reiner and his friends were quite a bit ahead of me, although certainly my work with torsion free groups involved a different sort of difficulties than theirs did.

But then I started wondering whether I could take their results and apply them in my own field. It certainly wasn't obvious how to do this. It wasn't just a matter of using new definitions; one needed to recast everything in a new conceptual framework. But basically I could show how the two fields were essentially parallel, although as far as I can remember, neither one really enriched the other to any great extent.

In abelian group theory, one says that a subgroup of another group is quasi-equal to it if the larger group can be obtained from the smaller by adjoining a finite number of elements. Two groups are said to be quasi-isomorphic if one is isomorphic to a quasi-equal subgroup of the other. (This relation turns out to be symmetric.) I had defined two groups to be nearly isomorphic if an additional condition holds, which in fact makes the two groups look very much alike, although not quite. The modules that Reiner was working with, on the other hand, were all finitely generated, so that the notion of quasi-isomorphism was not very powerful. For Reiner's modules, near isomorphism and quasi-isomorphism both turned out to be the same as what Reiner's people called being "of the same genus," which was not a very profound concept for these finitely generated modules.

I don't think I ever got any publishable results out of learning about Reiner's work, but I did succeed in making people working with torsion free abelian groups familiar with modules over orders, and this theory later became quite important in that area. (Somewhat to my annoyance, because I had never realized its potential.) A few people had never liked my phrase nearly isomorphic and for a while some people started substituting the phrase of the same genus. But this somehow wasn't really appropriate for torsion free groups, so I was glad that it never really caught on.

 

Group Rings: a Closed-Ended Question

Most of my mathematical work, especially during the first few years, was a matter of desperation. All I ever really wanted was to establish a long enough list of publications in order to get tenure and preferably be promoted. When I did manage to write a paper, it was usually pretty good, but I was never much good at finding questions to work on. In particular, I never had the knack of reading through someone else's work and recognizing the closed-ended questions that would be worth answering and which could be answered with a reasonable amount of effort. So in desperation, I usually wound up working on open-ended questions that many other mathematicians would not even consider.

One example of a closed-ended question arose because my colleague James Brewer at Kansas had been sent a paper to referee. And the author of this paper (who had in fact been Brewer's dissertation advisor) had constructed an example he needed by forming the group ring of a torsion free abelian group with rank two (the Pontryagin group). And he stated that the dimension of this ring was two. Brewer couldn't see why this should be true, but since he didn't know a thing about abelian group theory, he was afraid that he might be making a fool of himself if he simply sent it back to the editor to ask the author for clarification. So he came to me.

After some discussion we realized that the ring in question could not have dimension greater than two. But it also clearly could not have dimension one, because one-dimensional rings have many special properties that this one did not. So Bob's your uncle.

For Brewer, this proof was just fine, but to me it seemed a bit slipshod, for want of a better word. It seemed to me that although we had certainly got what we needed for this particular example, we had missed the important underlying principle. My feeling was that it ought to be easy to prove that the dimension of a group ring for a torsion free abelian group ought to be the same as the rank of the group. I couldn't give any good reason for this. It's just that after one has a certain amount of experience in reading mathematics and doing some oneself, one starts to get a bit of a sense of the way things ought to be. I did have a thought about chains of pure subgroups of the group corresponding to chains of prime ideals in the ring.
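The simplest case already points in that direction (this is standard commutative algebra, not anything from our paper): for the free abelian group of rank r, the group ring over a field k (which is the setting of the discussion a little further on) is a Laurent polynomial ring, and its Krull dimension is exactly the rank:

\[
k[\mathbb{Z}^{r}] \;\cong\; k[X_{1}^{\pm 1},\dots,X_{r}^{\pm 1}], \qquad \dim k[X_{1}^{\pm 1},\dots,X_{r}^{\pm 1}] \;=\; r .
\]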

Brewer, knowing nothing about torsion free groups, just shrugged. My conjecture really didn't interest him at this point. But after a lot of work (much more work than I had expected), I was able to come up with a proof that he agreed was correct. I knew more about ring theory than he did about group theory, but a big part of my way of working was to write a proof using statements that I was pretty sure ought to be true, and then he would tell me whether these were known facts or (more often) needed to be proved.

So I was now happy, but Brewer suggested that it might be worthwhile to see whether group rings of the sort we were interested in might have certain well known properties that polynomial rings have. And with a great deal of effort, we managed to prove that.

And it was at about that point that he started talking about writing our results up and submitting them for publication somewhere. Up to then, it had never occurred to me that we were writing a paper. Eventually it appeared as   J.W. Brewer, E.L. Lady, and D.L. Costa, "Prime ideals and localization in commutative group rings," J. Algebra 34(1975), pp. 300 - 308.  (Doug Costa was a graduate student who was working with Brewer and participated in our discussions.)

Sometime later, maybe about the time that we were writing up the final draft of the paper, I had an insight that seemed fascinating to me. Namely, I now saw that these group rings were a lot more like polynomial rings than we had initially realized. They were in fact very analogous to what people in complex variable theory call fractional power series. The group ring for a torsion free group of rank two would consist of polynomials in two variables, but where fractional exponents would be allowed in certain cases. For instance, if X and Y were the two variables, it might be the case that the ring would contain XY to the one-third power, but not X to the one-third power or Y to the one-third power. But this was not the sort of thing that would have interested Brewer, and it was just an idea rather than something that could be stated as a theorem, so I didn't try to include it in the paper.

When I think back on it now, though, I am pretty sure that this idea was much stronger than I realized at the time. The group ring for a group of rank r over a field, it now seems to me, is an integral extension of the ring of polynomials in r variables (but with negative exponents allowed). And that insight, I think, would have reduced about half of our paper to mere trivialities, including the original theorem about the equality of the rank of the group and the dimension of the ring. If I'd been smart enough to have noticed that from the beginning, then Brewer and I would probably never have been sufficiently motivated to write our paper. That would have been a shame, because some of the results were of genuine interest, as evidenced by the fact that the paper attracted fairly widespread interest.
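To spell out why that insight would have trivialized the original dimension theorem (a sketch, using the rank-two example with fractional exponents from above): an element like (XY)^(1/3) satisfies a monic equation over the Laurent polynomial ring, so adjoining such elements gives an integral extension, and integral extensions do not change Krull dimension:

\[
T=(XY)^{1/3} \ \text{ satisfies } \ T^{3}-XY=0, \qquad \text{so } k[X^{\pm 1},Y^{\pm 1}][T] \ \text{ is integral over } \ k[X^{\pm 1},Y^{\pm 1}],
\]

and therefore has the same Krull dimension, namely 2, which is the rank of the group.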

I think that a part of my success in being able to write this paper was due to the fact that a year or two before I had done quite a bit of work with group rings, but without ever being able to prove anything really worthwhile. And I had worked my way through one book on the subject. It wasn't that anything I knew from this was specifically useful in my work with Brewer. It's just that in a certain sense I knew the territory, and so I had some idea of where one ought to try to go and what landmarks one should look for. And I had the desire to finally do some significant work on this subject where I had previously struck out.

 

An Idea That Should Have Been Good But Wasn't

The part of algebra that really attracted me had always been not abelian group theory but commutative ring theory. In fact, I had read a large part of the classic text by Zariski and Samuel while I was still an undergraduate. During the year I spent at the University of Illinois (a year after getting my Ph.D.), I got to know the very good commutative ring theorists Phil Griffith, Robert Fossum, and Graham Evans. Phil Griffith in particular was quite friendly to me, because his own background, like mine, was in abelian group theory. He had read my almost completely decomposable paper and had been impressed by it. But somehow, although I went to a number of talks on commutative ring theory, I always remained a hanger-on in the commutative ring crowd. As mentioned earlier, I learned a great deal of Irving Reiner's theory of modules over orders, and by sitting in on a course by John Gray I learned an enormous amount of category theory and topos theory. But I didn't get much involved in the commutative ring theory being done at Illinois at all.

The two joint papers I later wrote with Jim Brewer at Kansas were quite decent work, but nobody could call them major advances in commutative ring theory. And they involved a type of ring theory that was not then especially fashionable and which I didn't much care for, namely the sort of development which stemmed from the work of Brewer's dissertation advisor, Robert Gilmer, and which emphasized non-noetherian rings.

One idea I particularly wanted to follow up on was to take the theory of torsion free abelian groups and apply it to modules over commutative rings. Of course "everybody knew" that this worked for Dedekind domains, because a Dedekind domain is a kind of commutative ring which is not much different from the integers. But I was hoping to make this work for a wider class of integral domains.

I found a couple of journal articles in the library on torsion free modules over integral domains, and they didn't get very far. But it occurred to me that their mistake was in looking at torsion free modules rather than flat modules, which are, one might say, torsion free and more, and are very important in commutative ring theory.

At Illinois while I was there, people had talked a lot about Krull domains, a kind of integral domain that in some ways is like a Dedekind domain, but with larger dimension. And after I went back to Kansas, it occurred to me that the notion of height, which is so important in abelian group theory, worked perfectly well in Krull domains. So for the first and only time in my life, it seemed as if I had a really good mathematical idea. I.e. not an idea that developed out of a process of a lot of very discouraging hard work, but one which I knew from the very beginning could be used to write a successful paper. Since the Arnold-Lady paper had shown how to do most of the basics in the theory of torsion free groups using only principles in general algebra, I was now pretty sure that one could take all of the fundamental theorems for torsion free groups and prove them for flat modules over Krull domains.

I started having a grandiose fantasy about the development that would arise from this. I would see a long-standing dream come to fruition, where a large part of abelian group theory and commutative ring theory would merge. Commutative ring theorists would feel compelled to become familiar with a lot of abelian group theory, and abelian group theorists would feel obliged to expand their knowledge of commutative rings.

In fact, though, things didn't work out that way at all. Rank-one flat modules (i.e. flat submodules of the quotient field of the ring) could indeed be classified exactly in the way that rank-one abelian groups are, but all the other theorems I had hoped for (Baer's Lemma and such) failed. It wasn't just that I was unable to prove them; I was actually able to construct counterexamples showing that they were false.

I did manage to write a publishable paper, but in my judgment (and pretty much everybody else's, as far as I can tell) it was mostly a flop. "Completely decomposable flat modules over locally factorial domains," Proc. Amer. Math. Soc. 54(1976), pp. 27 - 31.

 

Splitting Fields: A Blue-sky Idea

I never really developed the skill of finding closed-ended questions to work on. So most of my mathematical work consisted of investigating open-ended questions, and I found the process hell.

For one thing, in the case of a closed-ended question like the one Brewer and I were addressing in our group-ring paper, one can be fairly assured that any competent algebraist will eventually find the answer if he works on it long enough and hard enough. But with an open-ended question, one doesn't know what one is looking for or whether there's anything worth looking for at all.

On the other hand, with some major exceptions, one is much more likely to come up with something remarkable when one is investigating an open-ended question. (The alternative is to investigate some very famous unsolved problem. If someone proves the Riemann Hypothesis, for instance, then there will be no lack of admiration for him. But no one without tenure should choose this path.)

In my last year or so at the University of Kansas, I was as usual in a state of desperation because I had no idea of where I was going to find a new research topic.

Playing hooky as usual, I started reading some extremely moldy old stuff that I'd sort of looked at before but never really learned: namely the classic theory of skew fields (division algebras). This is something so old that, as far as I can tell, almost nobody thinks about it any more. And one thing that caught my interest was the theorem that the dimension of a skew field over its center is a perfect square. Certainly I'd once read through the proof of that, but I couldn't remember it any more, so I had to look it up. It turned out to be related to the idea of the splitting field for a skew field. If one does an extension of scalars by the splitting field, then the new ring turns out to be the ring of n by n matrices over the splitting field, making it obvious that the dimension is n².
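The classical example, for orientation (this is textbook material, not anything special to my own work): the real quaternions H have center R and dimension 4 = 2² over it, and the complex numbers are a splitting field:

\[
\mathbb{H}\otimes_{\mathbb{R}}\mathbb{C} \;\cong\; M_{2}(\mathbb{C}),
\]

so after the extension of scalars the perfect square shows up as the size of the matrix ring.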

I wondered whether a finite rank torsion free group might also have a splitting field. This was a blue-sky idea if ever there was one. But at least I was able to write a paper that was accepted by the Journal of Algebra. ("Splitting fields for torsion free modules over discrete valuation rings I," J. Algebra 49(1977), pp. 261 - 275.) It was certainly not the greatest paper I'd ever written, and I later found Dave Arnold and other luminaries lukewarm in their enthusiasm for it.

But then, at about the time of my last two years at the University of Hawaii, a number of different things started coming together. I can no longer remember them quite in sequence.

For one thing, while I was still in Kansas, I played hooky in one of the most drastic ways I ever had, by starting to read a series of papers by Maurice Auslander. These papers were the worst possible case of a blue-sky idea. Auslander had developed an extremely complicated and outlandish functor-oriented approach to modules over a finite-dimensional algebra. Furthermore, his papers were almost impossible to read, because at each crucial point he would refer back to a previous paper, never with any brief summary of what the theorem or definition he was referring to was, and often without giving a specific theorem number and certainly never a specific page number. So one wound up poring through an extremely long and difficult paper (all his papers were extremely long and difficult), trying to find the result he was referring to.

But his work fascinated me, despite the fact that while developing a huge mass of very strange concepts, he never seemed to get any actual results.

But then something had happened. He had acquired as co-author a young Scandinavian (or possibly German) woman named Idun Reiten, who often spent time at the University of Illinois but whom I had never managed to meet during the year I was there. (Auslander himself was at an eastern university, Brandeis I believe.) And the Auslander-Reiten papers suddenly started to show that all the weird Auslander machinery could actually be used to obtain important new results about finite-dimensional algebras.

So I started wondering, as usual, "Is there any way that any of this strange stuff could be of any use to me?"

And strangely enough, I did manage to find a way to use it. Auslander and Reiten had defined a pair of functors, not very complicated in themselves, which could be combined to yield a functor DTr, which produced new indecomposable modules from old ones. And I figured out how I could do the same thing with torsion free abelian groups in some cases. In this way, one could produce a sequence of strongly indecomposable groups of ever-increasing rank. And to me, this seemed quite good, because there didn't seem to be a good supply of specific examples of torsion free groups lying around, except for obvious ones like almost completely decomposable groups.

At about the same time, I had decided that it was finally time for me to act like a responsible citizen of the torsion-free-abelian-group community and really understand the proof of the Kurosh Matrix Theorem. Kurosh was the most prestigious of the older cadre of Russian algebraists, and he had long ago figured out a way to represent torsion free groups by means of matrices with entries from the ring of p-adic numbers. Just about every leading American torsion-free group theorist had told me that Kurosh's approach was useless, because although one could describe a group, the matrices didn't enable one to determine any of its properties, even such a basic one as whether it was indecomposable. Joe Rotman (who had been Dave Arnold's dissertation advisor) was the only American mathematician who seemed to find the Kurosh matrices worth paying attention to. But in Russia, because Kurosh was such a powerful figure, use of the Kurosh approach was obligatory for anyone doing research on torsion free groups. (Which maybe explains why there didn't seem to be much of any good Russian research on the subject.)

Anyway, I worked my way through the proof of the Kurosh Theorem. And much later I was to realize that what Kurosh had done was essentially the same as the approach later used in a quite well known paper by Beaumont and Pierce. Pierce was quite certainly not aware of the connection of his work with Kurosh's, and in fact he was one of the American mathematicians who told me emphatically that the Kurosh approach was not useful. In any case, the Beaumont-Pierce theory was the key to my own work on splitting fields, which was at that point still not very exciting.

Also, in my efforts to be a responsible citizen, I finally started working my way through Dave Arnold's dissertation, which defined a duality for torsion-free groups within the context of quasi-isomorphism. This was not my kind of paper at all. The approach was largely computational rather than conceptual, using the framework of the Kurosh matrices. "Pedestrian in a difficult way" might have summed up my judgement of it. But the result, a duality for torsion free groups, was something that looked like it ought to have potential, although at this point the potential had not been much realized.

But somehow in the process of wasting my time by working so hard at understanding stuff which could be of no conceivable use to me, I realized that Dave Arnold's duality could be derived homologically in the context of the Auslander-Reiten theory, thus describing it in a conceptual way and eliminating the accursed matrices.

And out of this, by some process which still seems to me like a miracle, I was able to answer a question which I had been thinking about ever since I was a graduate student. Namely, while I had been taking Arnold's course on torsion free groups, he had suggested the problem of determining the divisible subgroup of the tensor product of two groups. This had captured my interest because it initially seemed like such an absurdly easy question that I thought that surely I could quickly find the answer. But over a period of several years, I had never managed to really make any progress at all on it.

But now, by using my interpretation of torsion free groups in the Auslander-Reiten framework, plus a theorem about modules over finite-dimensional algebras in a paper by Butler and his wife Sheila Brenner, I was able to come up with a grand identity which put together Dave Arnold's duality and the divisible subgroup of the tensor product. It was published as  "Relations between Hom, Ext, and tensor product for certain categories of modules over dedekind domains," Lecture Notes in Mathematics 874(1981), pp. 53 - 61.

And all this had been accomplished without much of any real work on my part. Simply a matter of stealing other people's results and seeing how they fit together. (This sort of theft is perfectly acceptable, even commendable in mathematics, but only if one fully acknowledges the sources one is using. And as far as my not having done much work, well, certainly there was an enormous amount of work involved in reading all those damned papers. But I hadn't had to do a lot of work actually proving things.)

Another thing I did in my efforts to become a responsible citizen was to finally put in the effort needed to understand a famous theorem by Tony Corner, to the effect that every finite rank torsion free ring (with a few obvious exceptions) is the endomorphism ring of some torsion free group. This is I think one of the best known theorems in the field, but the proof had always looked ugly to me and I had never seen any good reason to understand it.

As it turned out, it was not so much that the proof was ugly. But it used a few things that I had not been aware of and which took some effort for me to prove. For one thing, it used the fact that a mapping from a reduced module over the p-adic integers into itself is automatically linear over the p-adic integers. I suppose that this is fairly easy to see from a topological point of view, but at the moment that didn't occur to me, and I constructed a much more pedestrian nuts and bolts proof.

And by doing so, I realized that this result didn't depend on the fact that the ring of p-adic integers is complete in its topology. It only depends on the fact that its p-rank is 1, i.e. that R/pR is isomorphic to Z/pZ. And this meant that almost everything which one normally used the p-adic integers for could actually be done using the rings in my splitting fields. And my whole theory of splitting fields started to become a theory of splitting rings.

The thing is that the Corner Theorem and the Kurosh Theorem and Dave Arnold's Duality all depend on the idea of extending scalars so that one is dealing with modules over the p-adic integers instead of over the ordinary integers (or rather a localization of the integers at some prime number p) and then using the information gained to draw conclusions about the original groups. It is a form of descent, at least as I understand the word descent. (This is implicit in the development of Arnold Duality because of the use of Kurosh matrices.) But a drawback to this whole approach is that the ring of p-adic integers is not itself an object in the category one is working with, since it does not have finite rank. But now, if one looks at the category of p-local groups which are split by some finite-dimensional splitting field, and lets R be the intersection of that splitting field with the p-adic integers, then R itself belongs to that category and one can also use R for all the purposes that one would traditionally use the p-adic integers for, as long as one is looking at groups split by that splitting field.

And now by using the concept of splitting rings I could finally throw all the Auslander-Reiten homological stuff overboard and define Arnold Duality and give my determination of the divisible subgroup of the tensor product in a much more straightforward way. The only fly in the ointment was that I still needed Auslander's work to get his functors DTr and TrD, which I was convinced would turn out to be extremely useful.

Well, all this is now getting way too technical. But the main point is the way that a lot of diverse pieces, all of which I set about learning with no clear purpose in mind, suddenly came together in a remarkably coherent manner. Reviewing all this work now, it seems to me that it really clarifies the difference between my approach to mathematics and that of more prolific mathematicians such as Brewer or a mathematician at Hawaii I later co-authored a paper with, Adolf Mader. The way of finding ideas used by these mathematicians was to look through very recent papers, preferably ones that had not yet appeared in print (since almost all mathematicians send out copies of anything they write to their colleagues in the same subject area), looking for questions that are still open and which seem tractable. Whereas what I seemed to do for the most part was to look through articles that were often somewhat older (although the Reiner-Jacobinski work on modules over orders and the Auslander and Dlab-Ringel work mentioned below were fairly recent), often dealing with topics somewhat removed from my own work, which contained ideas that were really interesting to me. (I don't think I ever found anything useful in an unpublished paper that someone sent me in the mail, although sometimes I would admire the work.) And then I would constantly ask myself, "Is there any way that I can find a connection between these articles and the stuff I do?" It's an approach that shouldn't have worked, and I still can't figure out how much of my success was due to sheer blind luck.

I do think that there is an implication here for the study of creativity in general. I've known a number of successful writers, and I'm always interested in listening to creative people in any field talk about their work. And it seems to me that highly creative people almost always have a very wide range of interests.

This is one reason why I was always bothered by the extremely small graduate programs at some of the universities I've been associated with. Some of these schools apparently sometimes graduated some very good Ph.D's, who were fairly expert in their own sub-sub-specialty. But I didn't see how these students could know very much mathematics in general, since they had never had the opportunity to take anything beyond the most basic graduate courses.

 

Pontryagin Groups

Before I go on to talk about the paper I wrote for a conference in Rome, let me try to explain a little bit more about the concepts I was working with.

I have already mentioned that finite rank torsion free abelian groups can be seen as consisting of finite-dimensional vectors (or arrays) where the entries are rational numbers. There are basically two things going on that determine the shape of such a group, although certainly it's possible for groups to have a mixture of these two phenomena. (Let me say again that this is not a tutorial. I just want to give a few readers some general feeling for what the subject is like, without necessarily being 100% accurate all the time.)

On the one hand, there are Butler groups, which are shaped by what are called types. Basically a type corresponds more or less to an infinite sequence of denominators which occur at certain positions in the group. This set of denominators might consist of a sequence of higher and higher powers of a prime number or combination of primes. For instance, a group of rank two might contain elements of the form,

(2/5, 1/5),    (2/25, 1/25),    (2/125, 1/125),  (2/625, 1/625)   etc.
Each vector in this sequence is 1/5 the preceding vector. The fact that there is an infinite sequence of vectors determines the shape of the group. Of course this is an extremely simple example; usually the rank (dimension) of the group would be larger and there might be numerous sequences like this corresponding to different positions within the vectors.
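To make this concrete (one possible choice, since the description above only says that a rank-two group "might contain" such elements): the smallest rank-two group containing the standard lattice Z² together with all the vectors listed is

\[
G \;=\; \mathbb{Z}^{2} \;+\; \sum_{k\ge 1}\mathbb{Z}\cdot\Bigl(\tfrac{2}{5^{k}},\tfrac{1}{5^{k}}\Bigr) \;=\; \mathbb{Z}^{2}+\mathbb{Z}[1/5]\cdot(2,1) \;\subseteq\; \mathbb{Q}^{2},
\]

a sum of finitely many rank-one subgroups, which is exactly the Butler pattern.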

It's also possible that the denominators in question, rather than being powers of a single number, consist of products of more and more primes. For instance, instead of the sequence of elements given above, we might have

(2/2, 1/2),   (2/6, 1/6),   (2/42, 1/42),    (2/462, 1/462),    etc.
Here the denominators are products of larger and larger sets of primes. In other words, each new denominator is obtained from the previous one by multiplying by a prime number (here by 3, then 7, then 11). (Actually, the fact that the numbers are prime is not really very important, and the fact that they're all different is also not essential, but it does make the situation easier to understand.) A particular group may have examples of both kinds. (For future reference, I will mention that the first example exhibits an idempotent type, whereas the second exhibits a locally free type. Since all this is much too technical anyway, I won't try to define these terms, although I will say more about them later.)

In contrast to this Butler pattern of a sequence of larger and larger denominators at some fixed position within the vectors, there is another phenomenon possible where the denominators keep getting larger, but at the same time the vectors also slide over horizontally, as it were. For instance, as a contrast to the first example above, we might have a group whose shape is determined by the sequence of elements,

(1/5, 2/5),    (11/25, 17/25),     (86/125, 92/125),    (211/625, 592/625),    etc.
As in the first example, the denominators here keep increasing by multiples of 5. But now the numerators are also sliding over, as it were, but not in a completely arbitrary way. In case the pattern is not completely clear (as it's probably not), what's happening is that the numerator for each new fraction in the series is obtained by adding some multiple of the previous denominator (or possibly 0) to the previous numerator. (For instance, 11 = 1 + 2*5,    86 = 11 + 3*25,    211 = 86 + 1*125.) A key point is that each fraction in the sequence can be obtained from the following one by multiplying the fraction by 5 and subtracting (or adding) an integer. (For instance, 17/25 = (5*92/125) - 3.) The importance of this is that if we assume that the group already contains all vectors whose coordinates are all integers (which will usually be the case in examples we create, and in any case can be contrived by a change in coordinates), then we can leave out the first ten of these strange vectors with increasing denominators, or even the first hundred, because they can all be derived from the ones that follow. So what we're seeing is that the group being constructed is obtained as a direct limit.
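For anyone who wants to check the pattern rather than take my word for it, here is a small verification of the arithmetic above (just Python's exact fractions applied to the numbers in the example; nothing deeper than that):

    from fractions import Fraction

    # The example vectors (1/5, 2/5), (11/25, 17/25), (86/125, 92/125), (211/625, 592/625).
    vectors = [
        (Fraction(1, 5),     Fraction(2, 5)),
        (Fraction(11, 25),   Fraction(17, 25)),
        (Fraction(86, 125),  Fraction(92, 125)),
        (Fraction(211, 625), Fraction(592, 625)),
    ]

    for earlier, later in zip(vectors, vectors[1:]):
        for a, b in zip(earlier, later):
            # Each coordinate of a vector equals 5 times the corresponding coordinate
            # of the next vector minus an integer, which is why the earlier vectors
            # become redundant once the later ones are present.
            difference = 5 * b - a
            assert difference.denominator == 1
            print(f"{a} = 5*({b}) - {difference}")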

If you squint in the right way while looking at them, this example and the Butler group example seem almost constructed in the same way. In both cases, the group is determined by a set of vectors divided by higher and higher powers of 5, but in the first case this vector is constant and in the present case it keeps sliding over. What is needed in order for them to be seen as two manifestations of the same phenomenon is for the sliding-over vectors in this group to converge in some weird sense to some kind of limit. But to see how that could be possible would require talking about the p-adic numbers (5-adic numbers, in this case), which I am not about to do. But I will mention for the cognoscenti that it's important that the p-adic limits of the numerators in the two coordinates have a ratio that is irrational.

In any case, this "sliding over" construction yields the sort of groups that the Kurosh-Beaumont-Pierce theory is really needed for and the sort of groups (slightly modified) at the heart of my work on splitting fields.

Let me also mention that this type of construction is exemplified in one of the oldest known interesting examples of a finite rank torsion free abelian group --- the Pontryagin group. In some ways, these groups are really the opposite of Butler groups. There doesn't seem to be any good name for them, so I would suggest calling them (after more carefully defining them) Pontryagin groups.

 

The Rome Paper

At about the time that I arrived at the University of Hawaii (1977), I received an invitation to a conference on abelian groups in Rome, and was asked to submit a paper for the conference proceedings. This paper would be accepted without being refereed, so it was a wonderful opportunity for me to present things without worrying about some editor objecting to my doing things in my own way.

The only problem was ....   The usual problem, namely that I had no new results that hadn't already been published, and no real idea of anything to work on. But recently certain thoughts had been rattling around in my head, and I thought that maybe if I fooled around with them long enough I might be able to spin something publishable out of them. Spinning gold out of straw, as it were. At least I could write them all down without complaints from some editor that certain theorems were not completely new.

I had started thinking that Butler groups could be fit into the new splitting ring paradigm that I was developing. In fact, if one looked at the category of Butler groups corresponding to some finite set of idempotent types, then this turned out to be the class of groups split by a splitting ring which was a product of certain subrings of the rational numbers. (The concept of an idempotent type is illustrated in an example above, and will be defined a little better below, when I talk about tensor products).

And as I started trying to figure out how to present this coherently, I realized that my whole way of thinking about splitting rings (rather than splitting fields, as I had done in my earlier papers) had almost completely changed.

Among other things, in this paper I expanded upon a theorem I had proved in a paper for an earlier conference, namely that the category of groups split by some finite-dimensional splitting ring, with respect to quasi-homomorphisms, is equivalent to the category of modules over a certain finite-dimensional algebra. Now in my "Rome paper" (as I came to call it) I proved that the category of such groups with respect to homomorphisms (rather than quasi-homomorphisms) is equivalent to a category of finitely generated modules over a certain noetherian ring. I realize that almost no one will be following all these technicalities, but let me just say that it was a much more powerful theorem.

I would probably never have worked this out in detail if I hadn't been given carte blanche by the Rome conference to publish a paper written in whatever way I chose. But by the time I finished writing up the paper, I realized that it was much stronger than I had expected it would be, and I had some regrets at not having it published in the Journal of Algebra or some other refereed journal. (It was published as "Extension of Scalars for Torsion Free Modules over Dedekind Domains", in Symposia Mathematica 23(1979), pp. 287--305.)

Dlab, Ringel, and Quivers

Around this time, I embarked on a really major hooky-playing episode. Except that this time, after my experience with learning about Reiner's work and the Auslander theory, I actually had some suspicion that the new stuff might be of some use to me after all. But it was so exciting that I would have gone ahead and put in the effort to learn about it in any case.

What happened was that at a conference on commutative rings, somebody told me that recently certain people had developed a classification theory for finitely generated modules over finite dimensional algebras (a kind of non-commutative ring). Now as mentioned above, as part of my splitting ring work I had proved a theorem showing that certain categories of finite rank torsion free groups with respect to quasi-homomorphisms were equivalent to categories of modules over finite-dimensional algebras. So it was not too much of a gamble to assume that this new classification theory might have some value in my work.

It took me quite a while to chase down the article by these two guys, who turned out to be a Canadian named Dlab and a German named Ringel. And then when I looked at it, it looked like it would be a quite formidable challenge to read it. For one thing, I noticed that they were using Dynkin diagrams, something that nobody in abelian group theory had ever had any occasion to invoke. But I did have some familiarity with Dynkin diagrams, because of having once sat in on a course in Lie algebras. In any case, that particular aspect of the paper was not as intimidating as it looked.

In any case, I was able to extract what I needed from their paper. And it completely transformed what I had done in the Rome paper. It didn't invalidate it, but it did give a whole new and to some extent cleaner way of looking at it.

For one thing, Dlab and Ringel had defined what they called Coxeter functors, which turned out (in the cases I cared about) to be the same as Auslander's functors  DTr and TrD. So now I could completely free myself from the Auslander-Reiten work. (I actually really liked the Auslander-Reiten papers a lot. But the problem I had was that I didn't think I would ever get other people working in torsion free groups to put in the enormous effort required to read them.) More important, I realized that the category of quasi-homomorphisms of Butler groups with a specified set of types was equivalent to the category of representations of what Dlab and Ringel called a quiver. And this certainly gave one an improved method for constructing Butler groups and for seeing to what extent a given class of Butler groups might be classifiable.

In one way it might have been better if my Rome paper had been refereed, because I later learned that Butler himself had published results that had anticipated my insight on the use of quivers as regards Butler groups. My not having known of Butler's paper was an example of inexcusable laziness on my part. Certainly I would have learned of it if I'd actually gone to the Rome conference, where I'm sure I would have encountered Butler. But at this point I had just moved to Hawaii from Kansas and my life was simply too chaotic for me to be taking a trip to Europe, where I'd never been before. (Especially Rome, which I'd always heard was full of thieves, which in fact it is.)

To my relief, though (I guess), only one person ever made a remark to me about Butler's paper, and he informed me about it rather gently.

Seeing Butler groups in terms of quivers was like my paper on the application of height sequences for classifying flat submodules of the quotient field of a Krull domain, in that it was a "good idea." Meaning an idea where the theorems are definitely worthwhile, but fairly obvious, given the original idea, and very little work is involved in finding the proofs. These two papers were both examples of research that started with the answer rather than starting with the question. I should have suspected that somebody would have scooped me on the use of quivers.

 

The Social Aspect of Research

I was fairly fortunate during my first couple of years in Hawaii in that some rather distinguished visitors with a lot of familiarity with abelian group theory came for extended stays. Consequently, I was able to give a sequence of extended talks about my recent work. (The regular algebraists at Hawaii also attended and listened attentively, despite their lack of familiarity with abelian group theory.)

Like novel writing or oil painting or many other arts, mathematics is basically a solitary occupation. And yet at the same time, there's a social aspect to it that for many mathematicians is very important. One needs an audience, beyond the hypothetical group of readers many years in the future that one's paper is written for. The process of giving talks on my ideas was extremely helpful in encouraging me to organize them and improve the presentation I was offering. Hawaii, in this respect, was much more useful than Kansas had been. Because in Hawaii, the algebraists actually wanted to understand what I was saying.

Somehow I am reminded of an incident at UCSD when I was a graduate student there. A friend and I were having a conversation in the hall, and since we had quite a bit to say, we went into an empty classroom and sat down. And then after a while other people started coming into the classroom and sitting at the desks. And then a speaker arrived and began a talk, and my friend and I realized that we were trapped, so we shut up and listened. Nothing the speaker said made a bit of sense to us, but we looked at each other and shrugged. At the end of the talk, everybody applauded, and so we applauded as well, despite the fact that we had not the faintest idea what the talk had been about.

This to me seems typical of the seminars at Kansas, at least in algebra. But Hawaii, at least for the first couple of years I was there, was different.

At this point I was realizing, as I have explained many too many times elsewhere on my web site, that I would not be able to survive much longer financially as an algebraist, so I pretty much took it for granted that I was in my last years as a mathematician. I no longer put much effort into proving new theorems (although I did in fact prove a few), but mostly concentrated on explaining the way I saw the whole field of torsion free groups, starting with the basics (as reformulated in the Arnold-Lady paper), then moving on to Butler groups and some classical topics where I had never proved any new results myself, such as torsion free rings, but which I saw as being much more central to the whole theory than had been earlier realized and which I recast in the paradigms of commutative ring theory. It had always somewhat bugged me that abelian group theorists and commutative ring theorists used such different vocabularies that it was often not evident that they were talking about the same things. This was especially puzzling since one of the patriarchs of abelian group theory, Kaplansky, had made major contributions to both fields, and had been the first to point out that all the theorems on abelian groups work equally well for modules over principal ideal domains. (Actually, as Fred Richman pointed out to me, he had exaggerated slightly in this respect.)

 

A Summing-Up Seminar

About a year before I took my first sabbatical, at U.C. Berkeley, Adolf Mader, my fellow abelian group theorist at Hawaii, arranged for a conference on abelian groups to be held in Honolulu. For me, this was an example of extremely bad timing, because my plan at that time was to look for a non-academic job the following year while I was at Berkeley and then leave the world of abelian groups forever. Although at that point I was still surviving economically, looking at the graph of recent inflation (as of 1983) and comparing it to the list of raises and non-raises UH had recently given its faculty made it apparent that continuing to devote myself to mathematical research would soon no longer be economically feasible. So I made a major mental readjustment.

In any case, that conference marked the end of my period of focussing on splitting rings and splitting fields for finite rank torsion free abelian groups, a period that had extended roughly over the years 1975 -- 1983. Looking back over the work of that period now, it seems like my research was a process of successive approximations. First there was the basic paper on splitting fields, which was in itself not of great importance but came about just because of my wondering whether the concept of a splitting field for a finite dimensional skew field (division algebra) might be imitated for finite rank torsion free abelian groups. The next step was the use of the Arnold Trick to show that the category of groups which had a specific finite-dimensional splitting field was equivalent to the category of modules over a specific finite-dimensional algebra. That was published in the conference proceedings for a conference held at New Mexico State in 1977. Then there was the discovery of the Auslander-Reiten work, and the realization that it was something that could be used for my own purposes.

The next step was my "Rome paper," written in 1978, extending the concept of splitting field to that of a splitting ring, and showing that the class of groups which have splitting rings also included many Butler groups. Then came my discovery of the Dlab-Ringel work, leading to a couple of papers which I would just as soon never have published, since they were pretty much just a matter of translating Dlab and Ringel's results into the torsion free groups environment.

Finally there was the use of a Brenner-Butler paper to show the relationship between Arnold duality and the divisible subgroup of the tensor product of two groups.

Certainly there was a fair amount of hard work in all this. In particular, the Splitting Fields II paper involved an enormous amount of extremely hard work --- work which might just as well never have been done. But for the most part, my research on splitting rings and splitting fields was just a matter of stealing and repackaging other people's theorems. It was important mostly because it made the abelian groups community aware of the existing work done by these other people, not because my own contributions were of major importance.

I think that the one really major original contribution made by my research during my last years in Kansas and my time in Hawaii was my development of Arnold Duality. As originally defined in Dave Arnold's thesis in terms of Kurosh matrices, it was an interesting little oddity, helpful in a few situations but not indispensable in them. By getting rid of the matrices and seeing it in a more functorial form, I converted it into a more natural tool. Then gradually, through the succession of papers on the tensor product, it became apparent that it plays an essential role in the theory of torsion free groups.

My other major contribution, in my opinion, was simply a matter of tying all the diverse pieces together. This included not only my own work, but all the research by other people which I saw as essential to the theory as a whole. This included the concept of quasi-isomorphism as seen from the point of view of category theory, as well as Butler's work, and the work of Beaumont and Pierce on torsion free rings, most of which went back to before my own involvement in the field. In their publications, Beaumont and Pierce had seemed to present torsion free rings (i.e. those torsion free groups on which a multiplication is also defined, making them into rings) as a very specialized topic, just as Fuchs had in his textbook. But in the seminars I gave during my first three years at Hawaii, I showed how these groups, especially the integral domains, were in fact central to the theory of finite rank torsion free groups as a whole. I also showed how the conceptual framework Pierce had developed for torsion free integral domains fit into classical commutative ring theory. I had really added nothing new, but by changing the language I had made it seem different.

Eventually I wrote up the seminars I had given at Hawaii and published them as part of the proceedings of the Honolulu conference. ("A seminar on splitting rings for torsion free modules over dedekind domains," in Abelian Group Theory, Lecture Notes in Mathematics 1006 (1983), pp. 1 - 48.) If I had left the academic world after my sabbatical as I had expected, I could in fact have had a sense of satisfaction that I had reshaped the field of finite rank torsion free groups.

But I didn't in fact leave the University of Hawaii. Legally, I needed to stay on for a year after my sabbatical, and by the end of that year, the economic situation was getting rapidly better. So I took the coward's way out and did what so many UH faculty in those days were doing, namely to remain physically present but to be absent in spirit.

A Leap Without Faith

Several years later, one evening the chairman of the Math Department, who was a very good friend, gave me a ride home from Anna Bannana's and suggested in a very friendly way that even though I was now a tenured full professor, it would still be a very good idea for me to continue to do a little research. I'd sort of been thinking along those lines myself, so I buckled down and wrote one not very spectacular paper which was actually accepted by the Journal of Algebra, and then decided that it would be good to try and do another major piece of research on tensor products.

I had earlier written a fairly reasonable paper on tensor products of Butler groups whose elements all had idempotent types. By way of clarification, a type in a torsion free group (which I described earlier in terms of height sequences) corresponds to a rank-one subgroup of the group. An idempotent type is one corresponding to a subring A of the field of rational numbers. If A is such a subring, then the tensor product of A with itself is the ring A itself. I.e. the type t(A) is its own square. Since a Butler group is generated by finitely many rank-one subgroups, if all these subgroups have idempotent type, then this gives one a toe-hold for investigating tensor products.
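As a concrete instance (just an illustration; any subring of the rationals would do), take A = Z[1/5], the rationals whose denominators are powers of 5, which is essentially the rank-one group behind the first example in the Pontryagin Groups section:

\[
A=\mathbb{Z}[1/5]\subseteq\mathbb{Q}, \qquad A\otimes_{\mathbb{Z}}A \;\cong\; A,
\]

so that t(A)² = t(A), i.e. the type of A is idempotent.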

The older terminology for an idempotent type was the rather stupid and illogical term non-nil type. I don't think I was the first person to call them idempotent types, but I was certainly the one who popularized the term.

Now I thought that maybe I could get somewhere by studying tensor products of Butler groups where the types were the opposite of idempotent. The accepted terminology at the time for types of this sort was "a type whose height sequence doesn't have any infinities." It occurred to me that if I were going to write a whole paper involving these things, then that term might be a little cumbersome. I thought that the logical term might be "locally trivial type," but I didn't really care much for that, and I didn't think it would sit well with a lot of people, so I chose the second-best term "locally free type."

Butler groups with locally free types are in fact the locally free Butler groups, i.e. those whose localizations at prime numbers are all free. On the other hand, Butler groups whose types are all idempotent are exactly those which are "quotient divisible," a term introduced by Beaumont and Pierce. Quotient divisible groups are pretty much the ones for which my splitting rings concepts were applicable.

Whereas idempotent types are characterized by the property that t² = t, locally free types have the property that t² is strictly greater than t. (But not all types with that property are locally free.) But I couldn't see any way of using this to get insight into tensor products of locally free Butler groups.
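A small numerical sketch of the distinction, using height sequences I have chosen purely for illustration (recall that types multiply by adding height sequences coordinate-wise):

\[
% Illustration only: two height sequences chosen to show the distinction.
t = (1,1,1,\dots) \;\Longrightarrow\; t^2 = (2,2,2,\dots) > t \quad \text{(locally free)},
\]
\[
s = (\infty,1,1,\dots) \;\Longrightarrow\; s^2 = (\infty,2,2,\dots) > s \quad \text{(not locally free, since an $\infty$ occurs)}.
\]

Both squares are strictly larger than the original type, but only the first type is locally free, which is the point of the parenthetical remark above.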

Tensor products have to do with the possibility of multiplications on a group, or multiplication between two different groups, although the results of the multiplication most often lie in some third group. Although it's often not very difficult to prove general theorems about tensor products, in a lot of cases it is not very easy to see what a tensor product of two specific groups actually looks like.

One way of getting started in studying tensor products is to look for examples of groups or pairs of groups where there actually does exist a naturally occurring product. An obvious example is the case of a ring, which by definition is a structure with addition, subtraction, and multiplication (satisfying familiar algebraic rules). In other words, a ring is precisely a group on which a multiplication is defined (satisfying the rules). Thus if R is a ring, there is a mapping from R ⊗ R into (in fact onto) R. Furthermore, it was proved by Beaumont & Pierce that torsion free rings are always quotient divisible. Rings are well known and have been widely studied, and consideration of rings had been a big help in my study of tensor products of quotient divisible groups.
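In symbols, this is just the standard observation that a bilinear multiplication induces a map out of the tensor product; nothing here is specific to any particular paper:

\[
% The map induced on the tensor product by the ring multiplication.
\mu : R \otimes_{\mathbb{Z}} R \longrightarrow R, \qquad r \otimes s \longmapsto rs,
\]
which is well defined because multiplication is bilinear, and is onto whenever $R$ has an identity, since then $r = \mu(r \otimes 1)$.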

Locally free groups are in a way the opposite of quotient divisible groups. In particular, rings are never locally free, except for a few extremely simple and uninteresting cases. So this entryway was not available in my study of locally free groups.

In fact, I had a hard time thinking of any examples of locally free groups on which a multiplication was naturally defined. But finally, a situation from linear algebra suggested an analogy. This was something well known but usually not given much attention, except in differential geometry. Namely, if V is a vector space, then there exists a dual space V*. And V* ⊗ V is isomorphic to the set of linear transformations from V into itself, which is basically just the set of n by n matrices with entries in the scalar field, where n is the dimension of V. (The tensor product here is taken over the scalar field.) And there is a mapping from V* ⊗ V into the scalar field which takes an element f ⊗ v to f(v). This is the mapping that sends a matrix to the trace of that matrix.
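Written out, the linear-algebra fact being used is the following standard one, recorded here only as a reminder:

\[
% Standard identification of V* tensor V with End(V), V finite-dimensional.
V^* \otimes V \;\xrightarrow{\ \cong\ }\; \operatorname{End}(V), \qquad
f \otimes v \;\longmapsto\; \bigl(w \mapsto f(w)\,v\bigr),
\]
and under this identification the contraction $f \otimes v \mapsto f(v)$ is exactly the trace, since the rank-one operator $w \mapsto f(w)\,v$ has trace $f(v)$.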

To apply this idea to tensor products of locally free groups, I needed something analogous to the dual V* of a vector space. In other words, I needed a duality functor. Arnold duality would not work. Dave Arnold had originally defined his duality for p-local groups, then later extended it to quotient divisible groups by taking the intersection of the duals at all primes. For locally free groups, this is obviously not going to work at all. But in Robert Warfield's 1968 paper, a very simple contravariant functor is discussed, defined on the category of locally free groups G all of whose types are no greater than t(A), for some specified rank-one locally free group A, by W(G) = Hom(G, A). The group W(G) is also locally free and W(W(G)) ≈ G. Thus it is a duality, and I decided that it should be called the Warfield dual. (Duh...) It is quite evident from the definition (for those familiar with the concepts) that there is a mapping from W(G) ⊗ G into A. Using the linear algebra theorem from the paragraph above, I was able to show that this mapping is quasi-split, i.e. that the rank-one group A is isomorphic to a quasi-direct summand of W(G) ⊗ G.
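In symbols, and with the caveat that this is only a paraphrase of the construction as described above (the natural map into A being evaluation):

\[
% Warfield dual with respect to a fixed locally free rank-one group A.
W(G) = \operatorname{Hom}(G, A), \qquad W(W(G)) \approx G, \qquad
W(G) \otimes G \longrightarrow A, \quad f \otimes g \longmapsto f(g),
\]
and the theorem is that this evaluation map is quasi-split, so that $A$ is a quasi-direct summand of $W(G) \otimes G$.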

All of this is certainly very technical. But the point is that somewhere in this process, the assumption that G is a Butler group had ceased to be relevant. So I now seemed to be conducting an investigation into tensor products of locally free groups in general.

I have mentioned that mathematical research is often a process of making guesses and then checking them out. And at this point, I decided to investigate a hypothesis that was so absurd that only a fool could have expected it to be true. Namely, I wondered: is it possible that the statement (theorem) just given is reversible? That is, the conjecture was that if G and H are strongly indecomposable groups, then H ⊗ G has a quasi-summand of rank 1 if and only if H is the Warfield dual of G.

It's not that I was actually crazy enough to believe that this conjecture was true. I had no evidence whatsoever to suspect that the "only if" part of it might have any validity. But I was lost in the wilderness without a compass with this problem. And when you're in that situation, often trying to prove that a certain statement is true gives you a consistent direction to move in, and you very often learn something whether or not you succeed.

And in this case, I succeeded. Much to my surprise. After a length of time that I'm sure must have been months, I had done many extremely long calculations, partly in my head and partly on scraps of paper. And I was convinced that my thinking was correct. What I was not convinced of, though, was that I would ever be able to explain it in a way that would make sense to anyone else.

But what happened next was not so unusual, only a bit more extreme than usual. Once you have a proof for a theorem, you start to boil it down: to make it more concise, to make it clearer, to see places where you can invoke standard theorems to cut short the long explanations.

I presented this theorem at a conference in Colorado. I had sent out several copies in advance, of course, and some people who had got those copies had actually read them. But at least half the people in the audience had not seen the proof yet, and at least a few of them were actually following what I was saying. And I was pleased to notice that when I got to the end of the proof, there were a few gasps from listeners. I don't even remember the proof now, unfortunately, so I'd have to go to a good university library to read the journal article, if I really cared. But it was an amazing job, if I do say so myself, of putting together a number of incredibly diverse pieces, some of which were not by any means the usual suspects for a proof like this.

Theorems I Have Proved

If you ask me how to prove mathematical theorems, I would be tempted to follow Einstein's example and say, "I wouldn't know. I've only proved a few theorems in my whole life." But that wouldn't be completely true. Like any mathematician, I've proved lots of theorems in my day. But there were only a few major theorems in my career whose proof as such was a remarkable accomplishment.

There was my dissertation. The proof was remarkable and was the whole point of the paper, as far as I'm concerned. The theorem itself never seemed to me of any great value.

There was one really remarkable theorem in the Arnold-Lady paper. I never presented it to an audience of my peers, but I know that there had to have been a few gasps of admiration when people received the pre-print copies I mailed out, because of the way it combined category-theory arguments with some extremely antique pieces of non-commutative ring theory.

There was the almost completely decomposable groups paper. There were a couple of really good proofs in that paper.

The paper on nearly isomorphic groups was not a piece of cake, by any means. Proving theorems is never a piece of cake, no matter how simple the proof seems once it's done. And I remember that there was one bit of thinking in the proof that Warfield especially admired. But really, the difficult part of that paper was finding that theorem in the first place, not proving it. (I think that mostly it was one of those cases where one proves the theorem first, then realizes what one has proved.)

The paper I wrote with Jim Brewer on group rings was one I sweated quite a bit over, but as I've said, I think now that most of the theorems could have been proved much more simply than I did. Except for one especially long and difficult proof, which I think I'm justified in being proud of (although I think that this is another case where the theorem itself was not important enough to justify all the work required to prove it). It was in fact the only one that people generally gave me credit for; they assumed that since I was an abelian group theorist, the other proofs in the paper must have been mostly done by Brewer.

And then this paper on tensor products of locally free groups.

Now that, I can say, was a proof!
