by Joel Spencer
Ramsey theory
Six points are in general position in space (no three in a line, no four in a plane). The fifteen line segments joining them in pairs are drawn, and then painted, some segments red, some blue. Prove that some triangle has all its sides the same color.
–William Lowell Putnam Competition, 1953 ([3], page 38)
Andrew Gleason’s fascination with combinatorial puzzles, his computational skills, and his algebraic insights often led to interesting, deep results in discrete mathematics. We sample some of them here.
The Putnam problem quoted above introduces Ramsey theory, where Gleason made one of his first contributions. Ramsey theory starts with the fact that for any \( k_1 ,\dots, k_r \) there is a least \( n = R (k_1 ,\dots, k_r ) \) such that when each of the \( \binom{n}{2} \) line segments joining \( n \) points in pairs is painted with one of \( r \) colors, then for some \( 1 \leq i \leq r \) there are \( k_i \) points with all segments between them given color \( i \). \( R \) is generally called the Ramsey function and the \( n = R (k_1 ,\dots, k_r ) \) are called Ramsey numbers. Solving the Putnam problem above proves \( R (3, 3) \leq 6 \). The arguments for the existence of \( R (k, l ) \) had been given by Ramsey and, independently, by Erdős and Szekeres in the early 1930s (see [e2] for general reference). Ramsey theory fascinated Gleason.
In 1955 Gleason and coauthor R. E. Greenwood [1] calculated some small Ramsey numbers. They found \( R (3, 3) = 6 \), \( R (3, 4) = 9 \), \( R (3, 5) = 14 \), \( R (4, 4) = 18 \), and \( R (3, 3, 3) = 17 \). The lower bounds sometimes called for ingenious algebraic constructions to provide counterexamples. For instance, to show \( R (3, 3, 3) > 16 \) they considered the 16 points as \( \operatorname{GF} (16) \). Let \( H \subset \operatorname{GF}(16)^{\ast} \) consist of the nonzero cubes. They colored the edge between \( \alpha , \beta \in \operatorname{GF}(16) \) by the coset of \( \operatorname{GF}(16)^{\ast} /H \) containing \( \alpha - \beta \). Despite great efforts and high-speed computers, only a few other values are known today. Even the value of \( R (5, 5) \) seems out of reach.
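The Greenwood–Gleason construction is easily checked by brute force. The sketch below realizes \( \operatorname{GF}(16) \) as \( Z_2[x]/(x^4 + x + 1) \) (the choice of primitive polynomial is an assumption of the sketch; any would do), colors each edge by the coset of the nonzero cubes containing the difference of its endpoints, and confirms that no triangle on the 16 points is monochromatic.

```python
from itertools import combinations

MOD = 0b10011  # x^4 + x + 1, a primitive polynomial over Z_2 (our choice)

def gf_mul(a, b):
    """Multiply two elements of GF(16), represented as 4-bit integers."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return r

# H = the nonzero cubes, a subgroup of index 3 in GF(16)*.
H = {gf_mul(gf_mul(t, t), t) for t in range(1, 16)}
# The three cosets of H partition GF(16)* and give the three colors.
cosets = [H, {gf_mul(2, h) for h in H}, {gf_mul(4, h) for h in H}]

def color(a, b):
    """Color of edge {a, b}: the coset containing a - b (= a XOR b in characteristic 2)."""
    d = a ^ b
    return next(i for i, c in enumerate(cosets) if d in c)

# Collect any monochromatic triangles; the construction produces none.
mono = [t for t in combinations(range(16), 3)
        if color(t[0], t[1]) == color(t[0], t[2]) == color(t[1], t[2])]
```

A monochromatic triangle on points \( a, b, c \) would mean some coset contains \( a - b \), \( b - c \), and their sum \( a - c \), so the check amounts to verifying each coset is sum-free.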
As a graduate student at Harvard in the late 1960s I chose to write a thesis partially on Ramsey numbers. Gleason told me he had spent a great deal of time looking for other exact values. Since I knew of his legendary calculating powers, I took this as sage advice to restrict my attention to their asymptotics.
Coding theory
Gleason’s research in discrete mathematics began not with Ramsey theory but with his cryptographic work during World War II. That’s when he first collaborated with Greenwood. After the war he participated in the burgeoning development of coding theory. Although he published little, he had a significant influence on others.
Vera Pless (her [e3] is a good general reference for coding theory) recalls “Gleason meetings” in the 1950s on error-correcting codes.
These monthly meetings were what I lived for. No matter what questions we asked him on any area of mathematics, Andy knew the answer. The numerical calculations he did in his head were amazing.
A binary code is a subset of \( \{0, 1\}^n \). The elements are called codewords. The weight of a codeword is the number of coordinates with value one. Gleason studied codes with an algebraic representation.
To define a quadratic residue code, begin by identifying \( \{0, 1\}^n \) with \( Z_2 [x]/(x^n - 1) \). Suppose \( n \) is a prime congruent to \( \pm 1 \bmod 8 \). Let \( Q \) be the quadratic residues of \( Z^{\ast}_n \) and set \( e (x) = \sum_{i \in Q} x^i \). Let \( C = (1 + e (x)) \), the ideal generated by \( 1 + e (x) \) in \( Z_2 [x]/(x^n - 1) \); then \( C \) is a code of dimension \( (n + 1)/2 \). These codes have proved particularly useful, in part because of their symmetries.
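For \( n = 7 \) the quadratic residues are \( \{1, 2, 4\} \) and the resulting code is (up to equivalence) the \( [7, 4] \) Hamming code. The sketch below presents it by a generator polynomial rather than the idempotent: it builds the cyclic code generated by \( x^3 + x + 1 \), one of the two cubic factors of \( x^7 - 1 \) over \( Z_2 \) (either choice gives an equivalent code; this presentation is an assumption of the sketch), and checks the dimension \( (n+1)/2 = 4 \) and minimum weight.

```python
from itertools import product

n = 7
g = 0b1011  # x^3 + x + 1, a cubic factor of x^7 - 1 over Z_2 (our choice)

def cyclic_shift(v, n):
    """One cyclic shift of an n-bit word (multiplication by x mod x^n - 1)."""
    return ((v << 1) | (v >> (n - 1))) & ((1 << n) - 1)

# Basis g, xg, x^2 g, x^3 g: deg g = 3, so the cyclic code has dimension n - 3 = 4.
basis = [g]
for _ in range(n - 3 - 1):
    basis.append(cyclic_shift(basis[-1], n))

# Enumerate all Z_2-linear combinations of the basis.
codewords = set()
for coeffs in product([0, 1], repeat=len(basis)):
    w = 0
    for c, b in zip(coeffs, basis):
        if c:
            w ^= b
    codewords.add(w)

min_weight = min(bin(w).count("1") for w in codewords if w)
```

The assertions confirm the code is cyclic (closed under coordinate rotation), has dimension 4, and corrects one error (minimum weight 3).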
Let \( \overline{C} \) be the code of length \( n + 1 \) given by adding a parity bit. (That is, the first \( n \) bits are in \( C \), and the last is such that the weight is even.) A symmetry of \( \overline{C} \) is a permutation \( \sigma \in S_{n +1} \) of the coordinates for which \( \sigma (\overline{C} ) = \overline{C} \). The Gleason–Prange Theorem [e3] asserts that the group of symmetries of \( \overline{C} \) contains a subgroup isomorphic to \( \mathrm{PSL}(2, n) \).
A linear code is a \( C \subseteq \{0, 1\}^n \) which is a subspace of \( \{0, 1\}^n \). For such \( C \), \( C^{\perp} \) is the usual (over \( Z_2 \)) orthogonal subspace. When \( C \) has \( A_i \) vectors of weight \( i \), its weight enumerator is defined by \[ W_C(x, y) = \sum^n_{i=0} A_i x^{n-i}y^i. \]
The Gleason polynomials are finite sets of polynomials that generate all weight enumerators of a certain type.
Gleason found a particularly striking example for self-dual codes, those for which \( C = C^{\perp} \).
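A small instance: the extended Hamming code of length 8 is self-dual, and its weight enumerator \( x^8 + 14 x^4 y^4 + y^8 \) is, together with \( x^2 + y^2 \), one of the generating polynomials in Gleason's theorem. The sketch below verifies self-duality and the weight distribution from a generator matrix (the particular matrix is one standard choice, an assumption of the sketch).

```python
from itertools import product

# One standard generator matrix of the [8,4] extended Hamming code
# (identity block followed by a parity block).
G = [
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
]

# C = C^perp iff dim C = n/2 and every pair of rows is orthogonal over Z_2.
self_orthogonal = all(
    sum(a * b for a, b in zip(G[i], G[j])) % 2 == 0
    for i in range(4) for j in range(4)
)

# Weight distribution A_i = number of codewords of weight i.
A = [0] * 9
for m in product([0, 1], repeat=4):
    word = [sum(m[k] * G[k][j] for k in range(4)) % 2 for j in range(8)]
    A[sum(word)] += 1
```

The distribution \( A_0 = 1 \), \( A_4 = 14 \), \( A_8 = 1 \) gives exactly the enumerator \( x^8 + 14 x^4 y^4 + y^8 \).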
There are deep connections to invariant theory here.
The weight enumerator of \( C \) determines that of \( C^{\perp} \). The exact relationship was given by Jessie MacWilliams (1917–1990), one of Gleason’s most accomplished students, in her thesis.
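That relationship is the MacWilliams identity: over \( Z_2 \), \( W_{C^{\perp}}(x, y) = |C|^{-1} W_C(x + y, x - y) \). The sketch below checks it for one small illustrative code of our choosing, the \( [3,1] \) repetition code and its dual (the even-weight code), by comparing both sides at a grid of integer points; the brute-force enumeration of the dual is part of the sketch.

```python
from itertools import product

n = 3
C = {(0, 0, 0), (1, 1, 1)}  # the [3,1] repetition code (our illustrative choice)

# Dual code: all words orthogonal (mod 2) to every codeword of C.
C_dual = {v for v in product([0, 1], repeat=n)
          if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in C)}

def W(code, x, y):
    """Weight enumerator sum over codewords of x^(n - wt) * y^wt, evaluated at (x, y)."""
    return sum(x ** (n - sum(w)) * y ** sum(w) for w in code)

# MacWilliams identity: W_{C^perp}(x, y) = W_C(x + y, x - y) / |C|.
checks = [(W(C_dual, x, y), W(C, x + y, x - y) // len(C))
          for x in range(1, 5) for y in range(0, 4)]
```

Here \( W_C = x^3 + y^3 \), and the identity gives \( \tfrac{1}{2}\left[(x+y)^3 + (x-y)^3\right] = x^3 + 3xy^2 \), the enumerator of the even-weight code.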
Gleason proved much more along these lines in [2]. Neil Sloane starts his paper on “Gleason’s theorem on self-dual codes and its generalizations” [e1] with “One of the most remarkable theorems in coding theory is Gleason’s 1970 theorem about the weight enumerators of self-dual codes.”
The Putnam Exam
Gleason was the first three-time winner of the Putnam competition, finishing in the top five while at Yale in 1940, 1941, and 1942. He was disappointed in his first attempt, because he solved only thirteen of the fifteen problems.
For many years he selected one of the top five finishers for the Putnam Fellowship at Harvard, a fellowship he was awarded and declined in 1940 in order to remain at Yale. He wrote (with R. E. Greenwood and L. M. Kelly) a beautiful book [3] on the Putnam competition for the years 1938–64. For many problems his solutions (and there are often several) are splendid lectures in the varied subjects. Elwyn Berlekamp (also a Putnam winner) recalls discussions with him:
[Gleason] would usually offer two or three different solutions to the problem he wanted to talk about, whereas I rarely ever had more than one. He believed that Putnam problems encouraged very strong mastery of what he considered to be the fundamentals of mathematics.
Gleason was always eager to share his passion for mathematics in general and problems in particular. Bjorn Poonen (a multiple Putnam winner and the coauthor of a follow-up book on the Putnam competition 1985–2000 [e4]) recalls his undergraduate days:
Andy struck me as someone genuinely interested in helping younger mathematicians develop. When I was an undergrad at Harvard, he volunteered an hour or two of his time each week for an informal meeting in his office with a group consisting of me and one or two other math undergrads to discuss whatever mathematics was on our minds.
Amen. As a graduate student I had the good fortune to be Andy Gleason’s teaching assistant. We would meet in the small preparation room before the classes. Andy would discuss the mathematics of the lecture he was about to give. He was at ease and spoke of the importance and the interrelationships of the various theorems and proofs. My contributions were minimal, but I listened with rapt attention. It was in those moments that I learned what being a mathematician was all about.