

Autobiography

by Elwyn Berlekamp

Early chronology

Earned PhD and became assistant professor six years after graduating high school

I was born on September 6, 1940. I graduated high school in June 1958, and enrolled as a freshman at MIT that fall. I had a National Merit Scholarship, but no advanced standing. At MIT I majored in electrical engineering, in a cooperative program which included work in industry, usually leading to both BS and MS at the end of five years. My cooperative employer was Bell Telephone Labs (BTL). At MIT, I took extra courses, more advanced courses, and several advanced-standing exams. In December 1961 I was among five winners of the national William Lowell Putnam Mathematics Competition. I received both my BS and MS degrees in EE in September 1962. My master's thesis, in artificial intelligence, was supervised at MIT by Prof. John McCarthy (1927–2011) (who soon thereafter left MIT for Stanford) and at Bell Labs by John L. Kelly, Jr., author of a seminal 1955 paper1 on asset allocation in portfolio theory.

I spent the summer of 1963 working at Bell Labs under David Slepian, a renowned information and coding theorist. In the spring of 1964 Lotfi Zadeh, then chairman of Electrical Engineering at UC Berkeley (who later became very famous as the founder of fuzzy set theory), persuaded me to come to California during spring vacation to interview for an assistant professorship. They offered me the job as soon as I finished my PhD. I returned to MIT to work on my thesis in earnest. My committee was Bob Gallager, Claude Shannon, Peter Elias and Jack Wozencraft. They all paid attention, but they pulled in somewhat different directions. My PhD thesis, "Block Coding with Noiseless Feedback", spawned several publications. The key to one of them was to recast the problem as an asymmetric combinatorial game between the Coder and the Noisemaker, and then to find asymptotically optimum strategies for playing that game.

As a student at MIT, I had also been very interested in courses in the design of digital circuits. Upon arriving at UCB in the fall of 1964, I agreed to teach a related undergraduate course on "Operating Systems", a subject which was then considered avant-garde. The class included a precocious student named Ken Thompson. He was then a senior looking for a permanent job. He asked me for a job reference, but I persuaded him to stay on another year at Berkeley. He received an MS under my supervision, after which I got him a job in the computer science research department at Bell Labs, where he had enormous success. He coinvented the Unix operating system with Dennis Ritchie, for which they received the 1983 Turing Award.

In fall 1964, I joined the Bay Area chapter of the Institute of Electrical and Electronics Engineers' (IEEE) Information Theory Society. Other active members included Tom Kailath, Tom Cover, Norm Abramson, and Jim Omura from Stanford; Nelson Blachman and Phil Fire from GTE Sylvania; and Jim Spilker, an entrepreneur who started a company called Stanford Telecommunications.

In Janu­ary 1965 Sol Go­lomb re­cruited me to be­come a con­sult­ant to the Space Com­mu­nic­a­tions pro­gram at the Jet Propul­sion Lab (JPL) in Pas­adena, CA. For the next year and a half, I flew down there one day every week, and be­came part of a very stim­u­lat­ing group in­clud­ing Andy Vi­ter­bi and Len Klein­rock (then at UCLA), Ir­win Jac­obs (then at UC­SD), Lloyd Welch (at USC), Gus So­lomon (at JPL), Mar­shall Hall, Richard Stan­ley, and Bob McE­liece (at Cal­tech), and oth­ers.

I left Berkeley in the summer of 1966 and returned to Bell Labs in New Jersey. There I joined a group including Henry Pollak, Ron Graham, and Neil Sloane. In 1967–1968 I was recruited to also become a consultant to the Institute for Defense Analyses' (IDA) cryptography research center in Princeton. I returned to UC Berkeley in 1971, while retaining many of my informal connections with people at BTL, JPL, IDA, and MIT. By 2010, I had publications with over 40 coauthors. Nearly all of them were from those institutions.

One of my publications2 in a math journal attracted the interest of Paul Erdős, the legendary itinerant Hungarian mathematician. It constructed partitions which addressed the finite version of van der Waerden's problem. Although my paper gives the best construction known for some appropriate values of the parameters, it is far from the best bound on the other side. I encountered Erdős once or twice every year from 1968 to 1995, and on every occasion he expressed disappointment that neither I nor anyone else had significantly narrowed this gap in van der Waerden's problem.

Coding (and synchronization)

Invented a series of algorithms and implementations which made powerful error-correcting codes useful in many applications

Prof. Avi Wigderson, head of computer science at the Institute for Advanced Study in Princeton, has declared these among the four most important algorithms of the 20th century in all of computer science.

Claude Shan­non’s clas­sic 1948 pa­per3 cre­ated the sub­ject of In­form­a­tion and Cod­ing The­ory. Shan­non’s ap­proach was stat­ist­ic­al: he proved by his now clas­sic­al ran­dom cod­ing ar­gu­ments that very good er­ror-cor­rect­ing codes ex­ist, but, ex­cept un­der very re­strict­ive cir­cum­stances such as those ad­dressed by Ham­ming, there were no con­struc­tions or prac­tic­al de­cod­ing al­gorithms. Bounds on the asymp­tot­ic per­form­ance of the best er­ror-cor­rect­ing codes were then quite weak. Shan­non him­self led stud­ies to im­prove them.

My con­tri­bu­tions to in­form­a­tion the­ory began in 1963. I found con­struc­tions for asymp­tot­ic­ally op­tim­um codes for three spe­cial cases:

  1. the bin­ary sym­met­ric chan­nel with noise­less feed­back;
  2. binary erasure-burst channels; and
  3. ar­bit­rary dis­crete memory­less chan­nels in the lim­it­ing case of zero rate.

My proof of the optimality of the zero-rate case resolved one of Shannon's favorite open problems, and led to my inclusion as a coauthor, with Shannon and Gallager, of parts one and two of our classic paper on performance bounds of long block codes. An extension of some of that work, which I coauthored with Irwin Jacobs, proved the limitations of a then popular coding technique using convolutional codes with sequential decoding.

In 1960, Reed and So­lomon had in­tro­duced a class of codes, which be­came known as RS codes. These codes are op­tim­um when the al­pha­bet size matches the block size. As this im­plies large al­pha­bets for long block sizes, these codes were ini­tially less pop­u­lar than BCH codes, a math­em­at­ic­ally sim­il­ar class of codes in­tro­duced at about the same time by Bose, Chaudhuri and Hoc­quenghem. BCH codes are typ­ic­ally bin­ary. RS codes can also be ef­fect­ive on bin­ary chan­nels when con­cat­en­ated with oth­er codes as pro­posed by For­ney in 1965, and/or when bursts are com­mon. Both BCH codes and RS codes are called “Al­geb­ra­ic codes” be­cause the loc­a­tions of er­rors can be ex­pressed as roots of a poly­no­mi­al over a large fi­nite field.

I became interested in Algebraic Coding in the mid-1960s. My work in that field provided a long series of major improvements to the subject. Best known among these numerous results are my algorithm to factor polynomials over finite fields,4 and the Berlekamp–Massey algorithm to find linear recursions in long data streams. I published both of those algorithms in my 1968 book, Algebraic Coding Theory (New York: McGraw-Hill, 1968, MR0238597). Another major new result in that book was the enumeration of the number of information bits in long binary BCH codes. This book won the IEEE Information Theory Society's annual best research paper award. The applicability of some of these algorithms to problems in cryptography attracted the attention of the National Security Agency (NSA), which in 1967–1968 recruited me to become a consultant to its research group at the Institute for Defense Analyses in Princeton, while I retained my primary job at Bell Labs. My work there led to my recognition by Eta Kappa Nu (now IEEE-Eta Kappa Nu) as their "Outstanding Young Electrical Engineer" in 1971. I was nominated for that award by John R. Pierce (1910–2002), then a Bell Labs executive best known for his earlier work inventing microwave communications.
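To make the second of these concrete, here is a minimal sketch (in Python, for illustration only) of the Berlekamp–Massey algorithm over GF(2): given a stream of bits, it returns the length and feedback taps of the shortest linear recursion (LFSR) that generates the stream. This is not the form in which the algorithm appears in Algebraic Coding Theory or in any particular decoder; in Reed–Solomon and BCH decoding the same recursion-finding step runs over a larger finite field, where the recursion it finds is the error-locator polynomial.

```python
def berlekamp_massey_gf2(bits):
    """Shortest LFSR over GF(2) that generates `bits`.
    Returns (L, taps), meaning
        bits[i] = taps[0]&bits[i-1] ^ taps[1]&bits[i-2] ^ ... ^ taps[L-1]&bits[i-L]."""
    n = len(bits)
    c = [0] * (n + 1)   # connection polynomial C(x) = 1 + c[1] x + c[2] x^2 + ...
    b = [0] * (n + 1)   # previous connection polynomial B(x)
    c[0] = b[0] = 1
    L, m = 0, -1        # L = current LFSR length, m = position of last length change
    for i in range(n):
        # Discrepancy: does the current LFSR already predict bits[i]?
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:                            # prediction failed: adjust C(x)
            t = c[:]                     # remember C(x) before the update
            shift = i - m
            for j in range(n + 1 - shift):
                c[j + shift] ^= b[j]     # C(x) <- C(x) + x^(i-m) * B(x)
            if 2 * L <= i:               # the LFSR must grow
                L, m, b = i + 1 - L, i, t
    return L, c[1:L + 1]


# Demo: 16 bits satisfying s[i] = s[i-1] XOR s[i-4]; the algorithm recovers
# length 4 and taps [1, 0, 0, 1], i.e. the recursion the stream was built with.
s = [1, 0, 0, 1]
for _ in range(12):
    s.append(s[-1] ^ s[-4])
print(berlekamp_massey_gf2(s))           # -> (4, [1, 0, 0, 1])
```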

My al­gorithm to factor poly­no­mi­als over fi­nite fields was soon ex­ten­ded to be­come the first step of sev­er­al al­gorithms to factor poly­no­mi­als over the ra­tion­al in­tegers. Joel Moses at MIT in­cluded these al­gorithms in his early sym­bol­ic ma­nip­u­la­tion sys­tem called MAC­SYMA, a pre­de­cessor of today’s Math­em­at­ica and Maple.

After I left Bell Labs and returned to the Berkeley faculty, I expanded my consulting activities, providing more improvements in algorithms for error-correcting codes. Some of them were then poorly implemented, leading to rumors that my algorithms were impractical. To counter this rumor, I founded a small company called Cyclotomics, hired three people part-time, and built a small special-purpose computer christened the "GF1" (the world's foremost Galois Field computer). It met the specifications of a frequency-hopping antijam military communications system called the Joint Tactical Information Distribution System (JTIDS). An RS code over an alphabet of 32 symbols lay at the heart of this system. The Defense Department's JTIDS program funded four big prime contractors. Cyclotomics became a subcontractor to one of them, ITT. Cyclotomics' RS decoders beat their competitors by wide margins in all dimensions: less than half the number of components, rack space, weight, and power consumption, and several times their speed. All this even though some of the competitors used the same "Berlekamp decoding algorithms" published in my 1968 book, but without the further improvements and embellishments. Cyclotomics' GF1 computer shattered the then prevalent myth that powerful high-speed algebraic error-correction was not feasible.

The GF1 architecture featured a special Galois Field arithmetic unit, a separate Galois Field based indexing system, and more traditional units managing input/output, the data's random-access memory (RAM), and the program's read-only memory (ROM). All of these units operated concurrently. GF1 programming was done in a novel language which simplified both simulation and testing, as well as direct compilation for the ROM. It also facilitated hardware debugging and maintenance via a then state-of-the-art methodology called signature analysis. The GF1 was documented in Cyclotomics' first patent (now summarized on my website via a link through the line of microcode at the bottom of my home page). It also became the basis of my election to the National Academy of Engineering (NAE) in 1977. I was then their youngest member.
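As a rough software analogue of what such a Galois Field arithmetic unit computes, the sketch below implements multiplication in GF(2^5), the 32-element field of the JTIDS Reed–Solomon alphabet mentioned above. The table-lookup approach mirrors one way a small hardware unit can multiply using ROM tables; the particular primitive polynomial x^5 + x^2 + 1 is an illustrative assumption, not a description of the GF1's actual internals.

```python
# Arithmetic in GF(2^5): elements are 5-bit integers, built on log/antilog tables.
PRIM_POLY = 0b100101        # x^5 + x^2 + 1 (assumed primitive polynomial, for illustration)
FIELD_SIZE = 32

exp_table = [0] * (2 * (FIELD_SIZE - 1))   # alpha^i; doubled so two logs can be added without a mod
log_table = [0] * FIELD_SIZE

a = 1
for i in range(FIELD_SIZE - 1):            # step through alpha^0 .. alpha^30
    exp_table[i] = exp_table[i + FIELD_SIZE - 1] = a
    log_table[a] = i
    a <<= 1                                 # multiply by alpha (i.e. by x)
    if a & FIELD_SIZE:                      # a degree-5 term appeared: reduce it
        a ^= PRIM_POLY

def gf32_add(x, y):
    """Addition (and subtraction) in GF(2^m) is bitwise XOR."""
    return x ^ y

def gf32_mul(x, y):
    """Multiply two GF(2^5) elements via the log/antilog tables."""
    if x == 0 or y == 0:
        return 0
    return exp_table[log_table[x] + log_table[y]]

# e.g. gf32_mul(2, 16) == 5:  x * x^4 = x^5 = x^2 + 1 in this field.
```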

The many em­bel­lish­ments and im­prove­ments in­tro­duced by Cyc­lo­tom­ics in­cluded the con­cepts of read­able eras­ures and hel­ic­al in­ter­leav­ing with burst fore­cast­ing. We also de­veloped oth­er mod­els of GF1 com­puters for oth­er ap­plic­a­tions. We de­signed and built the er­ror-cor­rec­tion de­coders for the down­link of the Hubble Space Tele­scope. By the 1980s, NASA God­dard and JPL had each de­veloped its own com­pet­ing “NASA stand­ard” for space com­mu­nic­a­tions. I worked with both of them, and tried to bring them to­geth­er. But my ef­forts failed, and NASA ad­op­ted two com­pet­ing stand­ards, both writ­ten by me. An­oth­er im­port­ant in­nov­a­tion that be­came a NASA stand­ard was my “bit-seri­al Reed–So­lomon en­coder”.5 To the great sur­prise of nearly every­one, it showed that Reed–So­lomon codes, based on large fi­nite fields, can be en­coded with hard­ware even sim­pler than bin­ary codes of the same re­dund­ancy.

A more refined decoding algorithm which I coinvented with Lloyd Welch provided several potential advantages over Berlekamp–Massey. Under some circumstances, the benefits of these new algorithms exceeded another factor of 10. A Berkeley graduate student named Madhu Sudan invented further extensions of the Berlekamp–Welch algorithm. Sudan's decoding provided more powerful error-correction for codes of very low rates. He applied this to probabilistic proofs, an important topic in theoretical computer science. Sudan became a professor of computer science at MIT and the winner of the International Mathematical Union's 2002 Nevanlinna Prize for outstanding contributions in Mathematical Aspects of Information Sciences. It is considered the applied-mathematics equivalent of a Fields Medal in pure mathematics.

Cyc­lo­tom­ics nev­er had any an­gel in­vestors or ven­ture cap­it­al. It boot­strapped its growth by prof­it­able sales, and even­tu­ally grew to a peak of 40 people. We even­tu­ally re­ceived 12 pat­ents of which I was in­vent­or or coin­vent­or. Some of these ad­dressed is­sues of syn­chron­iz­a­tion and soft de­cisions as well as tra­di­tion­al de­tec­tion and cor­rec­tion. Bey­ond com­mu­nic­a­tions, we also pi­on­eered ap­plic­a­tions of al­geb­ra­ic er­ror-cor­rec­tion tech­no­logy to sev­er­al data stor­age tech­no­lo­gies, in­clud­ing mag­net­ic tapes, mag­net­ic discs, and op­tic­al discs. Rel­ev­ant types of mag­net­ic tapes ranged from a few early im­port/ex­port re­cords made by the U.S. Census Bur­eau in the early 1950s to di­git­ized video re­cor­ded and played at 680 Mega­bits per second in the mid-1980s. At­tempts to li­cense the most ad­vanced cod­ing tech­niques to man­u­fac­tur­ers of read-only com­pact disks were un­suc­cess­ful; they chose to rely in­stead on less power­ful but roy­alty free al­gorithms in my 1968 book. By the mid-1980s there were over 40 US com­pan­ies try­ing to de­vel­op read-write op­tic­al memor­ies. Cyc­lo­tom­ics be­came in­volved in de­vel­op­ing con­trol­lers for sev­er­al of them. The biggest of these was East­man Kodak. Kodak ac­quired Cyc­lo­tom­ics in 1985.

Soon after the Kodak acquisition Cyclotomics launched the first project to record enhanced-quality digitized sound on movie film rather than on a separate magnetic strip, as was then standard. This project, which became known as Cinema Digital Sound (CDS) (a.k.a. "Digital Optical Sound System"), entailed challenging synchronization issues due to mechanical jitter and weave. It also provided a very challenging noise environment. Lloyd Welch and I invented and patented a solution to these problems. At Cyclotomics we designed and built a few prototypes. Our business partners in the movie-projector division at Optical Radiation Corp (ORC), led by engineers Howard Flemming and Ron Uhlig, combined them with new speakers, then replicated and deployed them to 60 movie theaters. CDS received a 1995 Scientific and Engineering Academy Award (an Oscar) from the Academy of Motion Picture Arts and Sciences. But by then I had left Kodak/Cyclotomics; ORC had suffered a big scandal in another division; and the business partnership between Kodak and ORC had fallen apart.

Jack McCauley's first job was as a Cyclotomics engineer working on CDS. He continued his career as a lead engineer for Guitar Hero, and later cofounded Oculus VR. By 2015, he was considered by many to be the world's foremost engineering expert on virtual reality.

In the late 1980s my col­leagues at Cyc­lo­tom­ics and I de­signed and built a pro­to­type “hy­per­systol­ic” de­coder, im­ple­ment­ing a rad­ic­ally new ar­chi­tec­ture which avoided prob­lems of clock skew at very high speeds then at­tain­able with Gal­li­um Ar­sen­ide com­pon­ents.

As of 2017, Kevin Neilson at Xelic is implementing some of my algebraic decoding algorithms.6 Using a fully pipelined architecture, he is applying the Berlekamp–Rumsey–Solomon algorithm in an iterative decoder using 6-error-correcting BCH component codes. This is used on fiber-optic channels with rates exceeding 100 Gbps.

In the early 1980s, Cyclotomics had a few commercial cryptography contracts, including one from General Motors seeking to improve their electronic car keys. In 1984 Cyclotomics spun off its consulting contracts in cryptography to a new startup which became known as Cylink. Its cofounders were Jim Omura (90%) and I (10%). Cylink obtained venture funding from a group formed by Jim Simons, whom I had met at an IDA interview in 1967. In 1996 Cylink went public on NASDAQ. Adjusted for inflation, it became a unicorn with over 400 employees. Its products encrypted all of SWIFT's foreign exchange transactions, totaling several trillion dollars per day. Cylink endured a boom, a scandal, a bust, and a partial recovery, all a couple of years ahead of many other Silicon Valley companies of that "Dotcom" era. Cylink was acquired by SafeNet in 2002.

Cyclotomics had a small division led by Steve Pope which became an early designer of custom ICs, fabricated by outside foundries. We received technical and business advice from Cyclotomics' stockholders Dave Hodges and John Torode. In the late 1980s John Torode and his wife formed a Seattle-based company named IC Designs, in which I became an active investor and the third board member. We developed some novel clock-generating circuits which included both analog and digital circuitry on the same chip, and the business boomed. In 1993 we were acquired by Cypress Semiconductor and became "Cypress Timing Technology."

I re­ceived the IEEE’s Koji Kobay­ashi Award in 1990 and the IEEE Ham­ming Medal in 1991. In 1993 I re­ceived the IEEE In­form­a­tion The­ory So­ci­ety’s Shan­non Award. What turned out to be the last of my ma­jor pub­lic­a­tions on new al­geb­ra­ic de­cod­ing al­gorithms was pub­lished in the IEEE-IT Trans­ac­tions in 1996. (In some spe­cial cir­cum­stances, it achieved yet an­oth­er or­der of mag­nitude per­form­ance im­prove­ment.) I re­ceived an IEEE Golden Ju­bilee Award in 1998. I was elec­ted to the Na­tion­al Academy of Sci­ences in 1999. The third edi­tion of my book, Al­geb­ra­ic Cod­ing The­ory was pub­lished by World Sci­entif­ic Pub­lish­ing in 2014.

Quantitative finance

Pioneered algorithmic trading

In 1986 I became a consultant to Axcom Trading Advisors, a small company located in Newport Beach, CA. Its founder and CEO was Jim Ax. They relied on computer algorithms to manage a pool of investments in futures contracts called the "Medallion Fund". Typical holding periods for their assets were one or two weeks. The pool manager was another then small company called Renaissance Technologies, owned and managed by James H. Simons. Although I visited Axcom only one or two days per month, I became interested in Axcom's algorithms for predicting short-term price movements based entirely on analysis of recent price history. Ax formulated several related mathematical problems, some of which I solved. Following a big gain in the spring of 1988, Ax bought a big house in Malibu, more than an hour's drive away from Newport Beach. Ax tried to manage the company by video conferencing. This proved difficult, even though the company always had fewer than a dozen people. Markets turned unfriendly and the Medallion Fund began to drift downwards. In fall 1988 Ax resisted my efforts to implement modest improvements in the trading algorithms, favoring instead more work on grander schemes in hopes of getting even bigger improvements. After being out of touch for several months in the spring of 1989 during my friendly exit from Kodak/Cyclotomics, I showed up at Axcom in early summer 1989. I learned that the fund had continued its downward drift and that Ax, the manager of the trading, was in a big dispute with Simons, the manager of all of their investors. I resolved this dispute by buying Ax out and restructuring the contracts with Simons to give him a 25% interest in Axcom in return for his not taking 25% of the fees off the top. With the approval of the other Axcom partner, Sandor Straus, the company and its remaining three employees moved to Berkeley, implemented the proposed embellishments in trading algorithms, and resumed trading. The markets were friendly. In the fall of 1989, Axcom recovered all that it had lost during the first half of that year. More improvements and a few more employees were added. In calendar 1990, Medallion investors enjoyed a gain of over 55%, net after big fees which then consisted of 5% of assets plus 20% of profits.
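As a back-of-envelope illustration of what "net after fees" means under that "5 and 20" schedule, here is a small sketch. The fee mechanics assumed (management fee charged on starting assets, incentive fee on the profit that remains) are simplifications for illustration, not a description of Medallion's actual fund accounting.

```python
def net_return(gross, mgmt_fee=0.05, incentive_fee=0.20):
    """Net-to-investor return for one year under a '5 and 20' fee schedule.
    Simplifying assumptions: the management fee is charged on starting assets,
    and the incentive fee applies to whatever profit remains after it."""
    profit_after_mgmt = gross - mgmt_fee
    incentive = incentive_fee * max(profit_after_mgmt, 0.0)
    return profit_after_mgmt - incentive

# Under these assumptions, a gross year of roughly +74% nets investors about +55%:
print(round(net_return(0.74), 3))   # -> 0.552
```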

In Janu­ary 1991 I sold all of my own­er­ship in­terests in Ax­com to Jim Si­mons and re­turned to aca­dem­ic pur­suits.

De­tails of al­gorithms in fin­ance, like those in gov­ern­ment cryp­to­graphy, are best kept secret, al­though some vague high-level guidelines have emerged in such art­icles as the cov­er story of Bloomberg Magazine’s Decem­ber 2007 is­sue. I be­lieve the first key to suc­cess is to hire people trained as en­gin­eers, math­em­aticians, stat­ist­i­cians, or phys­ic­al sci­ent­ists, NOT tra­di­tion­al money man­agers who were trained as MBAs.

Although 1990's net gain of 55% was then Medallion's best yet, they continued to do extraordinarily well. They continued hiring mathematicians, engineers, and scientists. They expanded into trading stocks as well as futures. Every calendar year has been profitable (although from mid-1988 to mid-1989 the fund was down over 20%). In some years net gains were over 100%. Incentive fees were increased to over 40% of gross profits. Despite this, over several decades, this fund's annualized net returns to investors have exceeded 30%. It has made Jim Simons the most successful fund manager on Wall Street, by far.

Under Ax the holding time of a position was typically a week or two. Under me, it became shorter. Because of this rapid turnover, Medallion's trading volume was much higher than the size of the fund. So as the fund grew, it eventually became so big that its own trading moved the markets. To mitigate this problem, Simons began redeeming all investors other than Renaissance employees. The rest of us were all squeezed out in the early 2000s.

In the late 2000s, four new partners and I launched another quantitatively managed fund, called "Berkeley Quantitative" (BQ). After a couple of years of development and studies, we accepted money from outside investors and began trading. As more investors entered, the fund size grew. Net performance reached 17%, and the fund size reached $250 million about 1.5 years after trading had begun. But the next couple of months saw unfriendly market conditions aggravated by BQ's internal organizational problems. Investors suffered a 15% drawdown. Some big investors withdrew their money. Berkeley Quantitative redeemed its remaining investors, paid all of its debts, and closed down. The initial investors realized a net return of about 2% after about three years. Definitely not a success, but much better than the wipeouts experienced by most startups.

Combinatorial game theory

Cofounded combinatorial game theory, an important new branch of mathematics

My fas­cin­a­tion with math­em­at­ic­al games began when I learned to play Dots-and-Boxes in the first grade. Since then, I have dis­covered sev­er­al math­em­at­ic­al the­or­ems which un­der­lie this game and oth­ers. In 1967 I met Richard K. Guy who shared these in­terests. I pro­posed jointly writ­ing a book on that sub­ject. Guy re­cruited John H. Con­way to join the ef­fort, and our col­lab­or­a­tion began when we con­vened at a con­fer­ence at Ox­ford in 1969. Con­way then lived in Eng­land; Guy in Canada, and I in Cali­for­nia. Con­way spent most of 1971–72 vis­it­ing Cal­tech, and I met him there one even­ing every week after spend­ing the day at JPL. New ideas and new res­ults ap­peared at a rap­id pace which con­tin­ued throughout the 1970s. Dur­ing this dec­ade all three of us con­verged every Decem­ber and June at Richard Guy’s home in Cal­gary. Our title was chosen to be Win­ning Ways for your Math­em­at­ic­al Plays. Among many new dis­cov­er­ies, Con­way in­ven­ted sur­real num­bers, a clas­si­fic­a­tion of in­fin­it­ies more rel­ev­ant to ivory-tower math­em­at­ic­al lo­gic than to any play­able board game. I viewed it as a di­gres­sion from the pur­pose of the book, so Con­way spun off and pub­lished it as a sep­ar­ate book called On Num­bers and Games. Joint work on Win­ning Ways re­sumed not long after that dis­rup­tion, and the scope of the book ex­pan­ded again and again. Its first edi­tion, pub­lished in 1982, com­prised two volumes with two parts each. Its second edi­tion ex­pan­ded each of these parts in­to its own sep­ar­ate volume, pub­lished sep­ar­ately in the eas­ily re­called years of 2001, 2002, 2003, and 2004.

Ac­cord­ing to Mar­tin Gard­ner, au­thor of the ex­tremely pop­u­lar “Math­em­at­ic­al Games” column in the Sci­entif­ic Amer­ic­an magazine from 1957 to 1982, Win­ning Ways is the “greatest con­tri­bu­tion of the 20th cen­tury to the bur­geon­ing field of re­cre­ation­al math­em­at­ics. No oth­er work has been so packed with com­pletely new and sig­ni­fic­ant ma­ter­i­al, or presen­ted with so much wit, depth, and clar­ity.” Win­ning Ways also be­came the found­a­tion of a less re­cre­ation­al sub­ject called “Com­bin­at­or­i­al Game The­ory”, CGT. It was of­fi­cially ac­cep­ted by Math Re­views (now Math­S­ciNet) as a new branch of math­em­at­ics. It has at­trac­ted the in­terest and con­tri­bu­tions of many math­em­aticians and com­puter sci­ent­ists.

In 1987 the American Mathematical Society sponsored a symposium commemorating the multifaceted works of John von Neumann, the legendary mathematician, economic game theorist, computer architect, quantum mechanics theorist, etc. I gave the invited lecture on game theory. Someone noted that most of the over 200 games in Winning Ways were invented by the authors, and asked me whether this theory applied to any "real" games. My answer was that it certainly applied to the popular kids' game of "Dots-and-Boxes", and possibly also to the classic East Asian board game known as "Go". But the question and my answer left me uncomfortable, and provoked my renewed study of Go. Over the next year, I discovered that late-stage Go endgames could be simplified by "chilling" (a special case of the "cooling" operator in Winning Ways), and that although cooling was generally not invertible, in several special cases, including Go, chilling could be inverted by a new operator I called "warming". David Wolfe (1965(?)-), then my graduate student, joined this research effort. We used these techniques to find provable strategies for playing many late-stage Go endgame problems. We published the book Mathematical Go. It received rave comments from a wide spectrum of reviewers. It included several problems we composed, some of whose solutions entailed sequences lasting over 100 moves. The problems were difficult enough to stump virtually all professional Go players, among whom Wolfe and I acquired a reputation as composers of very hard endgame problems. But, as it turns out, these problems all quickly yield to the application of the relevant CGT.

In efforts to quantify the importance of various choices of moves in earlier-stage Go endgames, I devised a variation of the game, which I called "Coupon Go". Multiday Coupon Go tournaments among some of the world's best professional Go players have been held in Seoul and Beijing. Some have adapted this as an important teaching technique. These include Naiwei Rui, many-time women's world champion, and her husband Jujo Jiang, by far the best and most famous Go player in North America in the 1990s.

By the turn of the cen­tury, CGT had been pro­duct­ively ap­plied to an in­creas­ingly wide col­lec­tion of pop­u­lar board games. These in­clude the an­cient game of Hawaii­an check­ers, known as Kon­ane. They also in­clude a mod­ern game called Amazons, which was in­ven­ted by Wal­ter Zamkauskas in Ar­gen­tina in 1988. It has be­come pop­u­lar on the web.

In 2014, the Amer­ic­an Math­em­at­ic­al So­ci­ety pub­lished a bril­liant treat­ise en­titled Com­bin­at­or­i­al Game The­ory. Its au­thor is Aaron N. Siegel, a former PhD stu­dent of mine. It con­tains nearly all that was known, and much that wasn’t known be­fore then.

STEM education (and its popularization)

Contributed to the popularization of recreational mathematics and its use as a vehicle to increase interest in STEM education

I have also been act­ive in the pop­ular­iz­a­tion of Sci­ence, Tech­no­logy, En­gin­eer­ing and Math­em­at­ics, dir­ec­ted both at K–12 edu­ca­tion and at adults.

Since 1999, my efforts to popularize these subjects include my coauthorship with Joe Buhler of the "Puzzles" column in the semiannual newsletter, Emissary, published by the Mathematical Sciences Research Institute (MSRI). One problem I published in 2012 sheds light on the causes of political gridlock.

I have served on the governing boards of each of the two foremost private schools in the East Bay: College Prep and Head-Royce. But my bigger efforts have been directed towards extracurricular education. In the late 1970s, I recruited a team of graduate students who beefed up the intellectual content of the very popular computer games at the Lawrence Hall of Science. As chairman of the governing board of MSRI in the late 1990s, I helped finance Zvezdelina Stankova's initiative to start the Berkeley Math Circle, where I have also been an annual guest teacher. The circle has a few dozen students, now typically junior high school age, who meet in a classroom one evening per week to share and enjoy solving math or logic problems not covered in their school curricula. Sam Vandervelde wrote a book, Circle in a Box, explaining how to organize and run such a circle. There are now over a hundred of these circles active in the United States.

One key to the suc­cess of such ef­forts is to erad­ic­ate (or at least blur) the line many people ima­gine between “real” math­em­at­ics/en­gin­eer­ing and “re­cre­ation­al math­em­at­ics”. There are many games which straddle this ima­gined di­vide.

Math­em­at­ics lies all around us. It lies un­noticed just be­low the sur­face of many things, such as the game of Dots-and-Boxes.

Since 2015, I have de­voted much of my time to the pre­par­a­tion of short in­tro­duct­ory videos aimed at get­ting more ju­ni­or high school stu­dents in­ter­ested in com­bin­at­or­i­al games and some of the math­em­at­ics be­hind them. Links to these videos can be found on my web­site and/or my chan­nel on You­Tube.

In 2017, I be­came the ini­tial donor to fund “Amer­ica Asks, Sci­ence An­swers”, a new pub­lic in­form­a­tion cam­paign by the Na­tion­al Academies.

Leadership

In the early 1970s, I was chairman of a group called FOCUS, which was the science and academic advisory panel for the Institute for Defense Analyses' cryptography research group in Princeton. Prof. Creighton Buck from the University of Wisconsin was another active member of that group. Following this, in 1974, J. C. R. Licklider appointed me to chair a high-level year-long study of future mass memory technologies, funded by the Defense Advanced Research Projects Agency (DARPA).

In 1973 I was President of the IEEE Information Theory Society. We became active participants in the Nixon–Kissinger era of détente. We invited R. L. Dobrushin, a well-known Russian information theorist, to write a survey of information theory in the Soviet Union. When his lengthy paper arrived, our attempts to find an experienced translator failed, so I translated it myself. It was published in IEEE-IT.7 We gave our annual best paper award to a young Russian named Valery Goppa, and I presented it to him in Moscow. His later work founded the specialty of algebraic geometry codes. We also inaugurated our now annual Shannon Award, and I presented the first such award to Shannon himself at our International Symposium in Ashkelon, Israel.

In 1974, the IEEE Press pub­lished my book, Key Pa­pers in the De­vel­op­ment of Cod­ing The­ory (New York: IEEE Press, 1974, MR0384299).

In 1974, a search com­mit­tee led by Creighton Buck offered me the dir­ect­or­ship of the Uni­versity of Wis­con­sin’s Army Math­em­at­ics Re­search Cen­ter, a strong re­search group that had be­come fam­ous for be­ing bombed by four an­ti­war act­iv­ists in 1970. It was a tempt­ing of­fer, but I de­cided to re­main at Berke­ley.

In 1973, I be­came one of a dozen fac­ulty cofounders of the Com­puter Sci­ence Di­vi­sion with­in UC Berke­ley’s De­part­ment of Elec­tric­al En­gin­eer­ing and Com­puter Sci­ence. I was then also a mem­ber of the MIT Cor­por­a­tion’s Vis­it­ing Com­mit­tee for their Elec­tric­al En­gin­eer­ing De­part­ment, where I played a minor role in get­ting MIT’s EE De­part­ment to fol­low Berke­ley by chan­ging their name to EECS in 1974.

In 1975–1977, I was UCB's Associate Chairman of EECS for its CS Division. It was a stellar group, whose faculty and alumni have subsequently received nine Turing Awards. During this period we recruited some stellar visiting professors, including Ken Thompson of Bell Labs (1983 Turing awardee), Jan Rajchman (a prolific inventor of over 100 patents from RCA, who had served on my DARPA panel), Alan Kotok (my MIT classmate who had become the lead designer of DEC's PDP8, PDP10, and KL10 computers), and Gus Solomon (of JPL, the coinventor of Reed–Solomon codes). I personally recruited several new faculty, including David Patterson, then fresh out of graduate school at UCLA. During this period, the CS Division also promoted to tenure several notables including Mike Stonebraker (2014 Turing awardee) and Susan Graham (who became chairman of Harvard's Overseers shortly before the end of Larry Summers' presidency there). Among many CS graduate students at UCB in the mid-1970s who went on to notable accomplishment and fame were Mike Sipser (now Dean of Science at MIT), Bill Joy (a software superguru and cofounder of Sun Microsystems), and Eric Schmidt (later CEO of Google).

In 1979, I joined MSRI founders Chern, Sing­er and Moore as well as UC math de­part­ment chair­man Al­berto Grun­baum, in a meet­ing which per­suaded UC Chan­cel­lor Bowker to “sup­port” (verbally but not fin­an­cially) MSRI as an al­lied but in­de­pend­ent off-cam­pus en­tity, with its own board of gov­ernors, neither con­trolled nor over­seen by UCB.

Like oth­ers with Top Secret clear­ances in cryp­to­graphy, I have re­frained from pub­lish­ing any pa­pers dir­ectly re­lat­ing to cryp­to­graphy, wheth­er they are clas­si­fied or not.

In 1976 the IEEE-IT Trans­ac­tions pub­lished a pa­per by Dif­fie and Hell­man which in­tro­duced a sys­tem for pub­lic ex­change of cryp­to­graphy keys.8 Oth­er aca­dem­ic pa­pers on cryp­to­graphy soon ap­peared in oth­er journ­als. This aroused con­cerns at NSA that this in­form­a­tion might be use­ful to U.S. ad­versar­ies. To ad­dress these con­cerns, Ad­mir­al Bobby In­man, then Dir­ect­or of NSA, and his chief law­yer, Dan Sil­ver, in­vited me to con­vene a meet­ing of cog­niz­ant aca­dem­ics. I con­vened that meet­ing at the UC Berke­ley Fac­ulty Club in Novem­ber 1979. Key at­tendees in­cluded Bobby In­man him­self and Berke­ley law pro­fess­or Mi­chael Hey­man. The meet­ing led to a com­mis­sion ap­poin­ted by the Amer­ic­an Coun­cil on Edu­ca­tion, whose mem­bers were rep­res­ent­at­ives of the rel­ev­ant schol­arly so­ci­et­ies. Mike Hey­man was chair­man. I rep­res­en­ted the IEEE. Dur­ing the course of that com­mis­sion’s work, Hey­man was pro­moted to be­come Chan­cel­lor of UCB.
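For readers unfamiliar with that idea, the toy sketch below shows the kind of public key exchange the Diffie–Hellman paper introduced: two parties exchange only public values yet end up with a shared secret. The tiny numbers are purely illustrative and have nothing to do with any deployed system.

```python
# Toy Diffie-Hellman exchange with deliberately small numbers (real systems use
# moduli of thousands of bits and add authentication on top).
p, g = 23, 5                     # public modulus and public base

a, b = 6, 15                     # Alice's and Bob's private exponents
A = pow(g, a, p)                 # Alice announces 8 in the clear
B = pow(g, b, p)                 # Bob announces 19 in the clear

shared_alice = pow(B, a, p)      # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)        # Bob computes (g^a)^b mod p
assert shared_alice == shared_bob == 2   # both hold the same secret key
```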

In 1978, McEliece, van Tilborg and I coauthored a paper on the inherent intractability of certain coding problems.9 The main result was that a general decoding problem for linear codes is provably hard, in a technical sense known in computer science as "NP-complete". Although apparently unrelated to cryptography, a subsequent paper by McEliece built on this hardness to design a new public-key cryptographic system, the cracking of which appears to require solving just such a decoding problem. Theoretically, that's a stronger standard of security than any commercial system in wide usage, either then or now.
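Stated roughly, the underlying decision problem is: given a binary parity-check matrix H, a syndrome vector s, and an integer w, decide whether some error vector e of Hamming weight at most w satisfies He = s.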

In the mid-1980s, I served on the UC Pres­id­ent’s “Sci­ence and Aca­dem­ic Ad­vis­ory Group” which over­saw the De­part­ment of En­ergy’s clas­si­fied nuc­le­ar en­ergy Labs at Liv­er­more and Los Alam­os.

I have been an act­ive mem­ber of the Fin­ance Com­mit­tee of the Na­tion­al Academy of Sci­ences (NAS) since 2000. From 2007–2013, I also served on the sep­ar­ate Fin­ance Com­mit­tee of the Na­tion­al Academy of En­gin­eer­ing (NAE) with then NAE treas­urer Dan Mote. I was the only per­son to have served on both.

Throughout the 1980s, 97% of MSRI's support came from NSF and other federal agencies. Near the end of that decade, MSRI Director Irving Kaplansky appointed a three-person group to raise additional funding from private sources. Its members were William Randolph Hearst III, Jim Simons, and I. After several unsuccessful forays over about a year, Hearst and Simons quit, but I persisted. I expanded the group to become MSRI's "Friends". There were many more meetings and forays, but negligible success. This situation continued through Kaplansky's retirement and replacement by Bill Thurston, a legendary topologist and Fields Medalist. He shared my belief that MSRI needed attention-getting physical objects rather than only pictures of mathematicians writing formulas on blackboards. A sculpture called "The Eightfold Way" was financed by a $25,000 gift from Mitsubishi Electric Research Laboratories, for whom I had consulted the prior year. It was MSRI's largest donation up until then.

MSRI’s board of gov­ernors ini­tially con­sisted of the chair­men of the math de­part­ments of MSRI’s spon­sor­ing in­sti­tu­tions. These were about a dozen west coast re­search uni­versit­ies in­clud­ing U. Wash­ing­ton, U. Ore­gon, UC Berke­ley, Stan­ford, UC­SC, UCLA, Cal­tech, USC, UC Irvine, UC­SD. Thur­ston nom­in­ated me to be­come chair­man of this board in 1994, in hopes of broad­en­ing it to in­clude some pro­spect­ive donors. But dona­tions re­mained near zero. When Thur­ston’s term as dir­ect­or ex­pired in 1995, I chaired the search com­mit­tee for his suc­cessor. I kept some key pro­spect­ive donors aware of its pro­gress. Dav­id Eis­en­bud was hired as a new dir­ect­or in 1997. Si­mons and Hearst joined the board soon there­after, and so did Sandor Straus, Ro­ger Strauch, Ed Baker and oth­ers. My term as chair­man ended in 1998, but Eis­en­bud then raised over \$10 mil­lion in less than a dec­ade. This mostly fin­anced a new aud­it­or­i­um. Sev­er­al named postdocs were es­tab­lished, and a grow­ing en­dow­ment cam­paign began. I re­main an act­ive MSRI trust­ee, in­volved in sev­er­al of MSRI’s out­reach pro­grams, in­clud­ing Na­tion­al Math Circles, the Bay Area Math­em­at­ics Olympi­ad for high school stu­dents, and the Na­tion­al Math Fest­ivals in Wash­ing­ton, D.C. The first one, in spring 2015, in­cluded a pro­gram which at­trac­ted 10 sit­ting US Sen­at­ors and a few staffers and mem­bers of the House of Rep­res­ent­at­ives. The second fest­iv­al in spring 2017, like the first one in 2015, at­trac­ted over 20,000 people to pub­lic events in Wash­ing­ton, D.C.

In 2000, Dave Hodges persuaded me to join the board of the International Computer Science Institute (ICSI), another off-campus institute in Berkeley. Its budget was about double MSRI's, but mathematics research is much cheaper than CS, and ICSI is not as well known. In most of the 1990s, ICSI's major funding came from a consortium of European governments led by the Germans, who used it as a postdoctoral training institute. But a change of German policy led to financial problems at ICSI, which were mitigated primarily by ICSI Director Nelson Morgan's successful applications for increased contract research support from American companies such as AT&T Labs. Hodges nominated me to be his successor as chairman of ICSI's board. As chairman, I traveled to Helsinki and Madrid in support of efforts to persuade other European countries to take advantage of the deficit left by the German cutbacks. I also recruited a new board member, Cliff Higgerson. A Berkeley alum living in Palo Alto, he was a very successful venture capitalist whose seed-stage "Vanguard IV" fund included Ciena, Advanced Fiber Optics, and Network Appliance. These three big hits of the dot-com era made this fund among the most successful venture funds ever. (In a surprisingly short time frame, it returned to investors over 15 times their paid-in capital.) Higgerson became my successor as chairman of ICSI.

Back in 1993, Thomas M. Rodgers, an Atlanta-based businessman and mechanical puzzle enthusiast, recruited me and a famous magician named Mark Setteducati to help him organize a "Gathering" to honor Martin Gardner, the prolific author who had succeeded in popularizing recreational mathematics, tabletop physics, mechanical puzzles, magic tricks, and other such topics through his 299 monthly Scientific American columns and many books. Unlike a conventional conference or symposium, where most attendees come to hear lectures from a small set of speakers, Rodgers wanted much more audience involvement in this "Gathering for Gardner" (G4G). Most attendees were required to make a short presentation and bring a few hundred copies of an original puzzle or mathematical art object, to be distributed among all other attendees. The event was a big success. It was repeated as G4G2 in 1996, G4G3 in 1998, …, continuing in subsequent even-numbered years. Its attendance includes some of the world's best mathematicians and magicians. It was originally funded only by registration fees plus Rodgers' personal contributions. Eventually it grew into a nonprofit corporation, although still run largely out of Rodgers' pocket. Following Martin Gardner's death in 2010, G4G initiated a plethora of widely distributed local events known as "Celebration of Mind" (CoM). They were originally intended to be held simultaneously on Gardner's birthday, October 21. But that requirement was later loosened to "late October". When Rodgers died in 2012, I became chairman of the G4G board. I guided the organization through its transition to greater stability. In 2014, there were over 150 Celebration of Mind events, on all seven continents (including Antarctica!). Although I retired from this position in 2016, G4G continues to thrive. Since 2016–2017, its chairman has been Nancy Blachman and its presidents have been Mark Mitton (the best magician on the east coast of the US) and Erik Demaine (a superstar computer science professor at MIT, widely known for his work on computational origami and numerous clever algorithms). The most recent Gathering, G4G13 on April 11–15, 2018, continued this grand tradition. Much more information about G4G and CoM, including many pictures and videos, can be found on the web at celebrationofmind.org.

Service

I’ve served on over 45 boards of vari­ous kinds.

Teaching

Supervised and mentored students who have had a significant impact on electrical engineering and computer science

A list of my thes­is stu­dents ap­pears on my web­site.10 Many have gone on to happy, pro­duct­ive, and dis­tin­guished ca­reers. Here are a few which might be con­sidered es­pe­cially note­worthy:

Ken Thompson (MS 1966) coinvented the Unix operating system. He also built "Belle", the world's computer chess champion in the early 1980s, and mentored Eric Schmidt.

Oscar Moreno (PhD 1973) spent most of his ca­reer as a pro­fess­or in Pu­erto Rico. He also foun­ded a com­pany which man­aged the Pu­erto Ric­an in­ter­net do­main.

Robert Li’s PhD thes­is (1974) pi­on­eered the study of loopy com­bin­at­or­i­al games. He later wrote the book on Al­geb­ra­ic Switch­ing The­ory,11 and be­came pro­fess­or and chair­man of the De­part­ment of In­form­a­tion En­gin­eer­ing at the Chinese Uni­versity of Hong Kong. Late in his ca­reer, he coau­thored his first pa­per in in­form­a­tion and cod­ing the­ory, called “Net­work Cod­ing”.12 It won the IEEE-IT 2005 an­nu­al “best re­search pa­per” award.

Larry Carter (PhD 1974) be­came pro­fess­or and chair­man of the De­part­ment of Com­puter Sci­ence and En­gin­eer­ing at UC San Diego.

Po Tong (PhD 1982) coinvented hypersystolic architecture at Cyclotomics, and later became lead designer of Reed–Solomon decoding chips at LSI Logic and Texas Instruments.

Jorge-Nuno Silva (MS 1991) be­came pro­fess­or at Uni­versity of Lis­bon, where with his col­league Car­los San­tos (a grand­stu­dent of Richard Guy), he showed that the Hawaii­an game Kon­ane in­cludes all com­bin­at­or­i­al games. They have also had an enorm­ous im­pact on edu­ca­tion: in all pub­lic schools in Por­tugal, in­tro­duct­ory com­bin­at­or­i­al games are now a stand­ard part of the 5th grade cur­riculum.

Dav­id Wolfe (PhD 1992) co­pi­on­eered the ap­plic­a­tion of Com­bin­at­or­i­al Game The­ory to Go.

Dav­id Moews (PhD 1993) is a lead­ing crypt­ana­lyst at IDA.

Dan Gar­cia (MS 1995) has been teach­ing the largest and most pop­u­lar com­puter sci­ence classes at UC Berke­ley.

Martin Mueller (PhD 1995 and ICSI postdoc 1996–1997) is professor of computer science at U. Alberta. His students have included David Silver (PhD 2009) and Aja Huang (postdoc 2011–2012), leaders of the major Artificial Intelligence breakthrough, AlphaGo.

Amit Sahai (Hon­ors BS 1996) is Dir­ect­or of the UCLA Cen­ter for En­cryp­ted Func­tion­al­it­ies.

Julia Kempe (PhD 2001) earned concurrent PhDs from Berkeley and U. Paris! She has published over 100 papers on Quantum Computation and Information. She is now the Director of the Center for Data Science at NYU.

Dav­id Des­Jardins (PhD 2001) be­came a lead­ing crypt­ana­lyst at IDA while on leave from gradu­ate school, then later an early em­ploy­ee of Google. He now leads his large phil­an­throp­ic fam­ily found­a­tion.

Aaron Siegel (PhD 2005 and MSRI postdoc 2005–2006) wrote the prevailing treatise on Combinatorial Game Theory. He now leads a group at Airbnb.