#### by Joel Spencer

#### Ramsey theory

Six points are in general position in space (no three in a line, no four in a plane). The fifteen line segments joining them in pairs are drawn, and then painted, some segments red, some blue. Prove that some triangle has all its sides the same color. – William Lowell Putnam Competition, 1953 ([3], page 38)

Andrew Gleason’s fascination with combinatorial puzzles, his computational skills, and his algebraic insights often led to interesting deep results in discrete mathematics. We sample some of them here.

The Putnam problem quoted above introduces
*Ramsey theory*, where Gleason made one of his
first contributions. Ramsey theory starts with
the fact that for any __\( k_1 ,\dots, k_r \)__
there is a least __\( n = R (k_1 ,\dots, k_r ) \)__ such that when each of the
__\( \binom{n}{2} \)__
line segments joining __\( n \)__ points in pairs is painted
with one of __\( r \)__ colors, then for some __\( 1 \leq i \leq r \)__ there
are __\( k_i \)__ points with all segments between them given
color __\( i \)__. __\( R \)__ is generally called the Ramsey function
and the __\( n = R (k_1 ,\dots, k_r ) \)__ are called Ramsey numbers.
Solving the Putnam problem above proves
__\( R (3, 3) \leq 6 \)__. The arguments for the existence of
__\( R (k, l ) \)__ had been given by Ramsey and, independently,
by Erdős and Szekeres in the early 1930s
(see
[e2]
for general reference). Ramsey theory
fascinated Gleason.
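The bound __\( R (3, 3) \leq 6 \)__ is small enough to check exhaustively: every red/blue coloring of the __\( \binom{6}{2} = 15 \)__ edges of __\( K_6 \)__ contains a monochromatic triangle, while __\( K_5 \)__ admits one that does not (color the pentagon's sides red and its diagonals blue). A minimal brute-force sketch in Python (the function name is mine):

```python
from itertools import combinations

def every_2coloring_has_mono_triangle(n):
    """Check whether every 2-coloring of K_n's edges has a monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    idx = {e: i for i, e in enumerate(edges)}
    triangles = [(idx[(a, b)], idx[(a, c)], idx[(b, c)])
                 for a, b, c in combinations(range(n), 3)]
    for mask in range(1 << len(edges)):  # each bit of mask colors one edge
        if not any((mask >> i & 1) == (mask >> j & 1) == (mask >> k & 1)
                   for i, j, k in triangles):
            return False  # found a coloring with no monochromatic triangle
    return True

print(every_2coloring_has_mono_triangle(6))  # True:  R(3,3) <= 6
print(every_2coloring_has_mono_triangle(5))  # False: R(3,3) > 5
```

Together the two checks pin down __\( R (3, 3) = 6 \)__ exactly, since the counterexample on five points shows the bound is sharp.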

In 1955 Gleason and coauthor
R. E. Greenwood
[1]
calculated some small Ramsey numbers.
They found __\( R (3, 3) = 6 \)__, __\( R (3, 4) = 9 \)__, __\( R (3, 5) = 14 \)__,
__\( R (4, 4) = 18 \)__, and __\( R (3, 3, 3) = 17 \)__. The lower bounds
sometimes called for ingenious algebraic constructions
to provide counterexamples. For instance, to
show __\( R (3, 3, 3) > 16 \)__ they consider the 16 points
as __\( \operatorname{GF} (16) \)__. Let __\( H \subset \operatorname{GF}(16)^{\ast} \)__ consist of the nonzero
cubes. Then they color the edge between
__\( \alpha , \beta \in \operatorname{GF}(16) \)__ by the coset of __\( \operatorname{GF}(16)^{\ast} /H \)__ containing
__\( \alpha - \beta \)__. Despite great efforts and high-speed
computers, only a few other values are known
today. Even the value of __\( R (5, 5) \)__ seems out of reach.
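The Greenwood–Gleason construction can be verified directly. Working in __\( \operatorname{GF}(16) = Z_2[x]/(x^4 + x + 1) \)__ (the choice of irreducible polynomial is mine), the fifteen nonzero elements split into three cosets of the subgroup of cubes, and since __\( (\alpha - \beta) + (\beta - \gamma) = \alpha - \gamma \)__ in characteristic 2, a monochromatic triangle exists exactly when two distinct nonzero differences and their sum all lie in one coset. A sketch:

```python
def gf16_mul(a, b):
    """Multiply in GF(16) = Z_2[x]/(x^4 + x + 1), elements as 4-bit ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011  # reduce modulo x^4 + x + 1
    return r

# x (= 2) generates GF(16)*; discrete logs identify the coset of the cubes
log = {}
e = 1
for i in range(15):
    log[e] = i
    e = gf16_mul(e, 2)
color = {v: log[v] % 3 for v in range(1, 16)}  # coset of GF(16)*/H

# A monochromatic triangle exists iff two distinct nonzero differences
# d1, d2 and their sum d1 + d2 (XOR in characteristic 2) share a coset.
mono = any(color[d1] == color[d2] == color[d1 ^ d2]
           for d1 in range(1, 16) for d2 in range(d1 + 1, 16))
print(mono)  # False: this 3-coloring of K_16 has no monochromatic triangle
```

With __\( R (3, 3, 3) \leq 17 \)__ from the pigeonhole side, this computation completes the proof that __\( R (3, 3, 3) = 17 \)__.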

As a graduate student at Harvard in the late 1960s I chose to write a thesis partially on Ramsey numbers. Gleason told me he had spent a great deal of time looking for other exact values. Since I knew of his legendary calculating powers, I took this as sage advice to restrict my attention to their asymptotics.

#### Coding theory

Gleason’s research in discrete mathematics began not with Ramsey theory but with his cryptographic work during World War II. That’s when he first collaborated with Greenwood. After the war he participated in the burgeoning development of coding theory. Although he published little, he had a significant influence on others.

Vera Pless (her [e3] is a good general reference for coding theory) recalls “Gleason meetings” in the 1950s on error-correcting codes.

These monthly meetings were what I lived for. No matter what questions we asked him on any area of mathematics, Andy knew the answer. The numerical calculations he did in his head were amazing.

A *binary code* is a subset of __\( \{0, 1\}^n \)__. The elements
are called codewords. The weight of a
codeword is the number of coordinates with value
one. Gleason studied codes with an algebraic representation.

To define a *quadratic residue code*, begin by
identifying __\( \{0, 1\}^n \)__ with __\( Z_2 [x]/(x^n - 1) \)__. Suppose
__\( n \)__ is a prime congruent to __\( \pm 1 \bmod 8 \)__. Let __\( Q \)__ be the
quadratic residues of __\( Z^{\ast}_n \)__ and set __\( e (x) =
\sum_{i \in Q} x^i \)__.
Then let __\( C = (1 + e (x)) \)__, the ideal generated by
__\( 1 + e (x) \)__ in __\( Z_2 [x]/(x^n - 1) \)__. Then __\( C \)__ is a code of
dimension __\( (n + 1)/2 \)__. These codes have proven
particularly useful, in part because of their symmetries.
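The dimension of such a cyclic code, the ideal generated by a polynomial in __\( Z_2 [x]/(x^n - 1) \)__, can be computed by Gaussian elimination on the spanning set __\( x^i g(x) \)__. As a hedged sanity check (function names are mine): for __\( n = 7 \)__ the residues are __\( Q = \{1, 2, 4\} \)__, and the idempotent __\( e(x) = x + x^2 + x^4 \)__ itself generates the __\( [7, 4] \)__ Hamming code of dimension __\( (n + 1)/2 = 4 \)__; which of __\( e(x) \)__ or __\( 1 + e(x) \)__ yields the __\( (n+1)/2 \)__-dimensional code depends on conventions tied to __\( n \bmod 8 \)__.

```python
def polymul_mod(a, b, n):
    """Multiply polynomials over Z_2 (as bitmasks) modulo x^n - 1."""
    r = 0
    for i in range(n):
        if b >> i & 1:
            for j in range(n):
                if a >> j & 1:
                    r ^= 1 << ((i + j) % n)  # exponents wrap mod n
    return r

def cyclic_code_dimension(gen, n):
    """Dimension of the ideal (gen) in Z_2[x]/(x^n - 1), by elimination."""
    pivots = {}  # leading-bit position -> reduced row
    for i in range(n):
        row = polymul_mod(gen, 1 << i, n)  # x^i * gen(x)
        while row:
            top = row.bit_length() - 1
            if top in pivots:
                row ^= pivots[top]
            else:
                pivots[top] = row
                break
    return len(pivots)

n = 7
Q = {pow(i, 2, n) for i in range(1, n)}  # quadratic residues mod 7: {1, 2, 4}
e = sum(1 << i for i in Q)               # e(x) = x + x^2 + x^4
print(cyclic_code_dimension(e, n))       # 4: the [7,4] Hamming code
```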

Let __\( \overline{C} \)__ be the code of length __\( n + 1 \)__ given by adding a
parity bit. (That is, the first __\( n \)__ bits are in __\( C \)__, and the last is
such that the weight is even.) A symmetry of __\( \overline{C} \)__ is a
permutation __\( \sigma \in S_{n +1} \)__ of the coordinates for which __\( \sigma
(\overline{C} ) = \overline{C} \)__. The *Gleason–Prange Theorem*
[e3]
asserts that __\( \operatorname{PSL}_2 (n) \)__ is a subgroup of the group of symmetries of __\( \overline{C} \)__.

A *linear code* is a __\( C \subseteq \{0, 1\}^n \)__ which is a subspace
of __\( \{0, 1\}^n \)__. For such __\( C \)__, __\( C^{\perp} \)__ is the usual (over
__\( Z_2 \)__) orthogonal subspace. When __\( C \)__ has __\( A_i \)__ vectors
of weight __\( i \)__, its weight enumerator is defined by
__\[
W_C(x, y) = \sum^n_{i=0} A_i x^{n-i}y^i.
\]__

The *Gleason polynomials* are finite sets of polynomials
that generate all weight enumerators of
a certain type.

Gleason found a particularly striking example
for self-dual codes, those for which __\( C = C^{\perp} \)__:
if __\( C \)__ is self-dual, then __\( W_C \)__ is generated by
__\( g_1 (x, y) = x^2 + y^2 \)__ and
__\( g_2 (x, y) = x^8 + 14x^4 y^4 + y^8 \)__.

There are deep connections to invariant theory here.
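Both generators are themselves weight enumerators of self-dual codes: __\( g_1 \)__ is the enumerator of __\( \{00, 11\} \)__, and __\( g_2 \)__ is that of the extended Hamming code of length 8. The latter can be checked by brute force; a sketch, with construction details (generator choices, helper names) mine:

```python
# Extended [8,4] Hamming code: span of four generator rows (as bitmasks),
# the cyclic shifts x^i * (x^3 + x + 1) in length 7, each with a parity bit.
def parity_extend(word):
    return (word << 1) | (bin(word).count("1") & 1)

hamming_gens = [0b1011, 0b10110, 0b101100, 0b1011000]
gens = [parity_extend(g) for g in hamming_gens]

# All 16 codewords: GF(2)-linear combinations of the generators
code = set()
for mask in range(16):
    w = 0
    for i in range(4):
        if mask >> i & 1:
            w ^= gens[i]
    code.add(w)

# Self-dual: every pair of codewords is orthogonal, and |C| = 2^(8/2)
assert all(bin(u & v).count("1") % 2 == 0 for u in code for v in code)
assert len(code) == 16

# Weight distribution matches g2 = x^8 + 14 x^4 y^4 + y^8
dist = [0] * 9
for w in code:
    dist[bin(w).count("1")] += 1
print(dist)  # [1, 0, 0, 0, 14, 0, 0, 0, 1]
```

Every weight here is divisible by 4, which is what forces the code to be self-orthogonal, and the dimension count then gives self-duality.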

The weight enumerator of __\( C \)__ determines that
of __\( C^{\perp} \)__. The exact relationship was given by
Jessie MacWilliams
(1917–1990), one of Gleason’s most
accomplished students, in her thesis.

__\[ W_{C^{\perp}}(x,y) = \frac{1}{|C|}W_C (x+y, x-y). \]__
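The identity can be verified by brute force on a small code. Expanding __\( (x + y)^{n-i}(x - y)^i \)__ and collecting the coefficient of __\( x^{n-j}y^j \)__ turns the identity into a transform of the weight distribution with Krawtchouk coefficients __\( K_j(i) = \sum_b (-1)^b \binom{n-i}{j-b}\binom{i}{b} \)__. A sketch (function names are mine), checked on the length-3 repetition code, whose dual is the even-weight code:

```python
from math import comb

def weight_dist(code, n):
    """A_i = number of codewords of weight i."""
    A = [0] * (n + 1)
    for c in code:
        A[bin(c).count("1")] += 1
    return A

def dual(code, n):
    """Brute-force dual of a binary linear code given as a set of bitmasks."""
    return {v for v in range(1 << n)
            if all(bin(v & c).count("1") % 2 == 0 for c in code)}

def macwilliams(A, n):
    """Weight distribution of the dual code via the MacWilliams identity."""
    size = sum(A)  # |C|
    return [sum(A[i] * sum((-1) ** b * comb(n - i, j - b) * comb(i, b)
                           for b in range(j + 1))
                for i in range(n + 1)) // size
            for j in range(n + 1)]

n = 3
C = {0b000, 0b111}                        # repetition code
print(weight_dist(dual(C, n), n))         # [1, 0, 3, 0]
print(macwilliams(weight_dist(C, n), n))  # [1, 0, 3, 0]: the identity agrees
```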

Gleason proved much more along these lines in [2]. Neil Sloane starts his paper on “Gleason’s theorem on self-dual codes and its generalizations” [e1] with “One of the most remarkable theorems in coding theory is Gleason’s 1970 theorem about the weight enumerators of self-dual codes.”

#### The Putnam Exam

Gleason was the first three-time winner of the Putnam competition, finishing in the top five while at Yale in 1940, 1941, and 1942. He was disappointed in his first attempt, because he solved only thirteen of the fifteen problems.

For many years he selected one of the top five finishers for the Putnam Fellowship at Harvard, a fellowship he was awarded and declined in 1940 in order to remain at Yale. He wrote (with R. E. Greenwood and L. M. Kelly) a beautiful book [3] on the Putnam competition for the years 1938–64. For many problems his solutions (and there are often several) are splendid lectures in the varied subjects. Elwyn Berlekamp (also a Putnam winner) recalls discussions with him:

[Gleason] would usually offer two or three different solutions to the problem he wanted to talk about, whereas I rarely ever had more than one. He believed that Putnam problems encouraged very strong mastery of what he considered to be the fundamentals of mathematics.

Gleason was always eager to share his passion for mathematics in general and problems in particular. Bjorn Poonen (a multiple Putnam winner and the coauthor of a follow-up book on the Putnam competition 1985–2000 [e4]) recalls his undergraduate days:

Andy struck me as someone genuinely interested in helping younger mathematicians develop. When I was an undergrad at Harvard, he volunteered an hour or two of his time each week for an informal meeting in his office with a group consisting of me and one or two other math undergrads to discuss whatever mathematics was on our minds.

Amen. As a graduate student I had the good fortune to be Andy Gleason’s teaching assistant. We would meet in the small preparation room before the classes. Andy would discuss the mathematics of the lecture he was about to give. He was at ease and spoke of the importance and the interrelationships of the various theorems and proofs. My contributions were minimal, but I listened with rapt attention. It was in those moments that I learned what being a mathematician was all about.