

A conversation with Joe Doob

by J. Laurie Snell

Student days

Snell: How did you get interested in mathematics?

Doob: I have always wanted to understand what I was doing and why I was doing it, and I have often been a pest because I have objected when what I heard or read was not to be taken literally. The boy who noticed that the emperor wasn’t dressed and objected loudly has always been my model. Mathematics seemed to match my psychology, a mistake reflecting the fact that somehow I did not take into account that mathematics is created by humans.

I made a crystal radio set in grammar school, became more and more interested in radio as I moved into high school, learned Morse code and built and operated a radio transmitter after passing the licensing examination. Thus it was natural that I thought I would major in mathematical physics in college. On the other hand, although I did very well in the first-year college physics course, it left me uneasy because I never felt I knew what was going on, even though I could solve all the assigned problems. The final blow came when I registered in a course in electricity, endured several class sessions full of pictures of diesel electric locomotives and decided that if this was physics I was deserting the subject, and I transferred to a mathematics course. This decision demonstrated that I had no idea of how to choose courses and was too much a loner to think of asking for advice. At any rate the result was that in the first semester of my sophomore year I was registered in three mathematics courses.

Snell: That college was Harvard. Harvard has always had a great mathematics department, but today Harvard is even involved in calculus reform. I assume that was not the case in your day. What was Harvard like when you were a student?

Doob in 1963 when he was President of the American Mathematical Society.

Doob: I knew nothing about college education or colleges, but applied to Harvard on the advice of my high school principal, and was accepted without examination because I was in the upper seventh of my high school class. The Harvard mathematics department was one of the best research departments in the country when I arrived in September of 1926, but the Harvard mathematics curriculum was extraordinarily slow paced. Freshman math was a semester of analytic geometry followed by a semester of calculus. Sophomore calculus treated differential calculus and a smattering of integration. Junior calculus introduced partial derivatives, integration of rational functions, and multiple integrals. In these three years there were a few remarks about limits but no epsilons. The first senior-graduate analysis course was Analytic Functions and introduced epsilon-delta methods. The text was Osgood’s Funktionentheorie and many students had to learn German, epsilon-delta methods and complex function theory at the same time. In those days there were very few advanced mathematics books in English, and those in French and German were too expensive for most students. My freshman calculus section, taught by a graduate student, was the only mathematics course I ever attended that gave me positive enjoyment. The applications of derivation thrilled me.

William Fogg Osgood

I did not suspect that he was an internationally famous mathematician, and of course I had no idea of mathematical research, publication in research journals or what it took to be a university professor. Osgood was a large, bearded, portly gentleman who took life and mathematics very seriously and walked up and down in front of the blackboard making ponderous statements. After a few weeks of his class I appealed to my adviser, Marshall Stone, to get me into a calculus section with a more lively teacher. Of course Stone did not waste sympathy on a student who complained that a teacher got on his nerves, and he advised me that if I found sophomore calculus too slow I should take junior calculus at the same time!

That would put me in the junior course after having missed its first weeks and without the background of most of the sophomore course, but the Harvard pace was so slow that the suggestion was not absurd. I stayed and suffered in the sophomore course and simultaneously sat in on the junior course. When midterm exams were given I was still completely lost in the junior course but caught up during Christmas vacation. Through the fortunate accident of having a tedious instructor I had gained a year! The analytic function course, taken in my junior year with Osgood as teacher, was my first course in rigorous analysis and I took to it right away in spite of his mannerisms.

Snell: You also did your graduate work at Harvard. What was that like?

Doob: When I graduated in 1930 and it was time to think about a Ph.D. degree, I asked Stone to be my adviser. He told me he had no problems for me, that I should go to J. L. Walsh, who always had many problems. Walsh accepted me and we had a fine relationship: he never bothered me, and conversely. Harvard suited my character in that there was so little supervision that I could neglect classes for a considerable time while cultivating a side interest, sometimes mathematical, sometimes not. Moreover there was a mathematics reading room in the library building, containing a collection independent of the main mathematics collection in the stacks. This room was an ideal base for a mathematics student who wanted to get an idea of what math was all about. Even the fact that the Harvard library was then badly run had its advantages. I soon found out that if I requested a book at the main desk, the book would frequently not be found, but that one could always find interesting books by wandering around in the stacks. The defects of the library advanced my general education.

Wladimir Seidel was a young Ph.D. at Harvard when I was there. We discussed a lemma he needed for a paper he was writing on the cluster values of an analytic function at a boundary point of its disk domain of definition. If we had known more about the Poisson integral, we would have realized that the problem was trivial. I worked out a complicated iterative geometric procedure to solve the problem, and he thanked me in a footnote, the first published reference to me. This episode is a fine example of the value of ignorance. If I had known more about the Poisson integral, I would have pointed out the proof of his lemma to Seidel and nothing more would have come of it. As it was, the lemma made me think about the relation between analytic functions and their limit values at the boundaries of their domains, and led to my doctor’s thesis.

When I finished what I hoped was my thesis I showed it to Walsh, who did not read it but asked Seidel, who had not read it either, what he thought about it. Seidel said it was fine and that was that; the thesis was approved and I got my doctor’s degree in 1932. Getting a Ph.D. in two years left me woefully ignorant of almost everything in mathematics not connected with my thesis work. I had missed fertile contact with Birkhoff, Kellogg and Morse, all three at Harvard and leaders in their fields. But I had benefited from Stone by way of typing for pay, and incidentally reading, most of his Linear Transformations in Hilbert Space, from which I learned much that was useful to me in later research.

Another useful thing I learned was editorial technique. I sent part of my thesis to the AMS Transactions. The Editor (J. D. Tamarkin) wrote me that he had read the first section of my paper, that it was OK and that he had not read the second section but it was too long. Since I did my own typing the solution was simple: I retyped the paper with smaller margins and each time I went from one line to the next I turned the roller back a bit to decrease the double line spacing. (Word processors, which would have simplified such an operation, unfortunately had not yet been invented.) The new version was accepted with no further objection. Tamarkin’s editorial report had not been a complete surprise to me because I had written a “minor thesis” for Harvard, a nonresearch Ph.D. requirement, which was accepted before I noticed that I had somehow omitted turning in one of the middle pages. Professor J. L. Coolidge read the thesis carefully enough to notice that his name was referred to but not carefully enough to notice that there was a skipped page number. A friendly young professor, H. W. Brinkmann, later secretly inserted the missing page for me.

From complex variables to probability

Snell: How did you find your way from complex variable theory to probability theory?

Doob: One summer in my graduate student years I sat in on a course in aesthetic measure, given by Birkhoff. He had developed a formula which gave a numerical value to works of art. Birkhoff was a first-rate mathematician, but it was never clear whether what would get high numbers was what he liked or what selected individuals liked, and, if the latter, what sort of individuals were selected. One bohemian-type student frustrated him by preferring irregular to regular designs, and in despair he told her that she was exceptional. In my youthful brashness I kept challenging him on the absence of definitions and he finally came to class one day, carefully focused his eyes on the ceiling and said that those not registered in the class really had no right to attend. I took the hint and attended no further classes. But Birkhoff bore me no grudge, and when I was wondering what to do after getting my Ph.D. he said I should go on the research circuit. It was obviously through his influence that I was given a National Research Council Fellowship for two years, to work at Columbia University with J. F. Ritt. Ritt was a good mathematician, but his work was not in my field. I had chosen Columbia University because my wife was a medical student in New York.

Ritt and I published a joint paper, to which I had made two contributions: (1) I typed it (he had married a professional typist who agreed to type his mathematics if he would get an office typewriter instead of his portable, a concession he refused to make); (2) I contributed the adjective “lexicographical” to an order he had devised.

I have a poor memory, and cultural reading of mathematics has never been of use to me. I have been a reviewer for Mathematical Reviews and a referee for various journals, but the papers I read in carrying out these duties were immediately forgotten. This memory lack meant that my mathematical background has been quite superficial, restricted to the context of my own research. Paul Lévy once told me that reading other writers’ mathematics gave him physical pain. I was not so sensitive, but reading did me no good unless I was carrying on research related to the reading, and even then it took me a long time to get the material in a form I could understand and remember. Because of these characteristics I have never been able to accomplish anything mathematically when I did not have a definite program. In New York, aside from exploiting further the ideas of my Ph.D. thesis, I was wasting my time. I decided that if I was to go further in complex variable theory I would have to get into topology, and for some reason I was reluctant to do this. Furthermore I was demoralized by the deep depression. The streets were full of unemployed asking for handouts or selling apples to make a few cents, and no jobs were opening up either in industry or academia. After two years in New York I still had no job prospects, even though I humiliated myself trotting after big shots at AMS meetings. B. O. Koopman at Columbia told me that I should approach Harold Hotelling, Professor of Statistics there, that there was money in probability and statistics. Hotelling said he could get me a Carnegie Corporation grant to work with him, and thus the force of economic circumstances got me into probability.

Doob receiving the National Medal of Science from President Carter in 1979.

Snell: Of course, there were no standard books on probability theory in those days. How did you go about learning probability?

Doob: Poincaré wrote in 1912 that one could scarcely give a satisfactory definition of probability. One cannot tell whether he was thinking of mathematical or nonmathematical contexts or whether he distinguished between them. The distinction is frequently ignored even now. It was not clear in the early 1930s that there could be a mathematical counterpart, at the same level as the rest of mathematics, of the nonmathematical context adorned with the name “probability.” In 1935, Egon Pearson told me that probability was so linked with statistics that, although it was possible to teach probability separately, such a project would just be a tour de force.

I became so rigidly intolerant of the current loose language that I ignored the textbooks, and I understood the interpretation of the Birkhoff ergodic theorem as the strong law of large numbers for a stationary sequence of random variables before I knew the Chebyshev inequality proof of the weak law of large numbers for a sequence of mutually independent identically distributed random variables! On the other hand Koopman, who showed me that proof and who was a pioneer in ergodic theory, did not realize that the ergodic theorem had anything to do with probability until Norbert Wiener and I told him the connection at an American Mathematical Society meeting.
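In modern notation, the Chebyshev inequality argument Koopman showed him is a one-liner (stated here as an editorial aside): if \( X_1, X_2, \dots \) are mutually independent with common mean \( \mu \) and variance \( \sigma^2 \), the average \( (X_1 + \cdots + X_n)/n \) has variance \( \sigma^2/n \), so

\[
P\left( \left| \frac{X_1 + \cdots + X_n}{n} - \mu \right| \ge \epsilon \right) \le \frac{\sigma^2}{n \epsilon^2} \to 0 \quad (n \to \infty),
\]

which is the weak law of large numbers.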

Kolmogorov’s 1933 monograph on the foundations of (mathematical) probability appeared just when I was desperately trying to find out what the subject was all about. He gave measure theoretic definitions of probability, of random variables and their expectations and of conditional expectations. He also constructed probability measures in infinite-dimensional coordinate spaces. Kolmogorov did not state that the set of coordinate variables of such a space constitutes a model for a collection of random variables with given compatible joint distributions, and I am ashamed to say that I completely missed the point of that section of his monograph, only realizing it after I had constructed some infinite-dimensional product measures in the course of my own research. Kolmogorov defined a random variable as a measurable function on a probability measure space, but there is a wide gap between accepting a definition and taking it seriously. It was a shock for probabilists to realize that a function is glorified into a random variable as soon as its domain is assigned a probability distribution with respect to which the function is measurable. In a 1934 class discussion of bivariate normal distributions Hotelling remarked that zero correlation of two jointly normally distributed random variables implied independence, but it was not known whether the random variables of an uncorrelated pair were necessarily independent. Of course he understood me at once when I remarked after class that the interval \( [0,2\pi] \) when endowed with Lebesgue measure divided by \( 2\pi \) is a probability measure space, and that on this space the sine and cosine functions are uncorrelated but not independent random variables. He had not digested the idea that a trigonometric function is a random variable relative to any Borel probability measure on its domain. The fact that nonprobabilists commonly denote functions by \( f \), \( g \) and so on, whereas probabilists tend to call functions random variables and use the notation \( x \), \( y \) and so on at the other end of the alphabet, helped to make nonprobabilists suspect that mathematical probability was hocus pocus rather than mathematics. And the fact that probabilists called some integrals “expectations” and used the letter \( E \) or \( M \) instead of integral signs strengthened the suspicion.
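Doob’s counterexample is quick to verify. Taking \( X(t) = \sin t \) and \( Y(t) = \cos t \) on \( [0,2\pi] \) with the normalized measure, direct integration gives

\[
E[X] = E[Y] = 0, \qquad E[XY] = \frac{1}{2\pi} \int_0^{2\pi} \sin t \, \cos t \, dt = 0,
\]

so the pair is uncorrelated; but independence fails, since for instance

\[
E[X^2 Y^2] = \frac{1}{2\pi} \int_0^{2\pi} \sin^2 t \, \cos^2 t \, dt = \frac{1}{8} \ne \frac{1}{4} = E[X^2] \, E[Y^2].
\]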

My total ignorance of the field made me look at probability without traditional blinders. I cannot give a mathematically satisfactory definition of nonmathematical probability. For that matter I cannot give a mathematically satisfactory definition of a nonmathematical chair. The very idea of treating real life as mathematics seems inappropriate to me. But a guiding principle in my work has been the idea that every nonmathematical probabilistic assertion suggests a mathematical counterpart which sharpens the formulation of the nonmathematical assertion and may also have independent mathematical interest. This principle first led me to the (rather trivial) mathematical theorem corresponding to the fact that applying a system of gambling, in which independent identically distributed plays to bet on are chosen without foreknowledge but otherwise arbitrarily, does not change the odds. Much later, the idea that a fair game remained fair under optional sampling led me to martingale theory ideas.

The University of Illinois

Joe Doob in 1974 when he became a member of the board of trustees of the Institute for Advanced Study.

Snell: Your first, and last, regular teaching job was at the University of Illinois. How did you end up there and what was it like in the early days at Illinois?

Doob: After three years of fellowships I finally received a job offer, from the University of Illinois in Urbana — rank of Associate, that is, Instructor’s pay for Assistant Professor duties. I was charmed by the small-town atmosphere of Urbana as soon as I arrived and never wanted to leave, even though the atmosphere changed through the years. I had never done any teaching but found teaching calculus to freshmen to be fun, once I had found out how to teach with a minimum of paper grading and preparation. In those days I could go into a class, ask where we were and go on from there. This technique unfortunately became less practical as my arteries hardened.

At first the advanced courses I gave were the bread-and-butter courses in real and complex variables. I had never worked out for myself or read any systematic approach to probability and had no feeling for what would be an appropriate sequence of topics. There was, however, pressure to teach probability because Paul Halmos and Warren Ambrose chose me as thesis adviser. They were both good enough to be guided through outside reading, but there was actually not much reading that I felt was adequate. A decent course based on the measure theory taught in those days had to discuss measures on abstract spaces, Borel measurable functions and the Radon–Nikodym theorem. And when I finally was pushed into teaching probability it was necessary to learn first and then discuss in detail such elementary, but new to me, subjects as Bernoulli distributions and Stirling’s formula.

Then and later the most embarrassing probability class lecture was the first, in which I tried to give a satisfying account of what happens when one tosses a coin. (A famous statistician told me that he solves the difficulty by never mentioning the context.) One wants to talk about a limit of a frequency, but “limit” has no meaning unless an infinite sequence is involved, and an infinite sequence is not an empirical concept. I made vague and heavily hedged remarks such as that the ratio I would like to have limit 1/2 “seems to tend to 1/2,” that the coin tosser “would be very much surprised if the ratio is not nearly 1/2 after a large number of tosses” and so on.
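The mathematical counterpart of these hedged remarks, which lives entirely inside a probability measure space and so avoids the empirical difficulty, is the strong law of large numbers for Bernoulli trials: if \( S_n \) is the number of heads in the first \( n \) tosses of a fair coin, then

\[
P\left( \lim_{n \to \infty} \frac{S_n}{n} = \frac{1}{2} \right) = 1,
\]

a statement about the measure of a set of infinite toss sequences rather than about any finite empirical record.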

The students never seemed to be bothered by my vagueness. For that matter professionals who write about the subject are usually also unbothered, perhaps because they never seem to be tossing real coins in a real world under the influence of Newton’s laws, which somehow are not mentioned in the writing.

Writing the stochastic processes book

Snell: Few mathematical books have had the influence that Stochastic Processes [1] has had. How did you come to write this book?

Doob: In 1942, Veblen recruited me among others to go to Washington to work for the navy, in mine warfare. The work needed knowledge of elementary physics, editorial experience and common sense. The last, I found, was the rarest of the three and made me reluctant to apply advanced mathematics to practical problems with imprecise data. The problems were fascinatingly different from my mathematical work but I was necessarily an outsider, and I was bored intellectually. What I needed was a mathematical project I could work on without a library. I wrote a potboiler on the component processes of \( N \)-dimensional Gaussian Markov processes, and then Shewhart came to my rescue in 1945, when he invited me to write a book for the Wiley series in statistics. I decided to write a book on stochastic processes and that I would get Norbert Wiener to write a section on their application to electrical engineering. I knew nothing about such applications but had had several contacts with Wiener and knew that he was involved with electrical engineers at MIT. On the other hand I had a copy of his — classified “Confidential” (!) — 1942 monograph (Extrapolation, Interpolation, and Smoothing of Stationary Time Series with Engineering Applications) and was worried that it was so vague on the engineering applications. It cheered me slightly that he had a machine at MIT which purportedly was significant for antiaircraft gunnery. But when Feller and I visited him we found only a wonderful toy based on a moving spotlight controlled by a delayed action lever. Feller and I played with it for a few minutes and managed to put it out of commission.

I started the book in Washington, doing only topics I could handle at home without a library. In 1946, when I was back in Urbana, Wiener visited Urbana to help dedicate the new electrical engineering building. It turned out that his idea of contributing to my book was to walk up and down on my porch making general remarks on communication theory, remarks which presumably I was to work up. I had great respect for Wiener’s work in probability and now have even more for his fundamental work in potential theory, but I did not see any substance in his remarks and delicately persuaded him that what he was talking about was not quite suitable for my book. The only result of his temporary role was that I inserted a couple of chapters on prediction theory in the book. They are somewhat out of character with the rest of the book but I had put so much work into getting them into what I thought was reasonable form that I did not have the heart to omit them. I wanted to remove the mystery from a straightforward problem of least squares approximation, largely solved by Szegö in 1920 and jazzed up by the probabilistic interpretation.

I intended to minimize explicit measure theory in the book because many probabilists were complaining that measure theory was killing the charm of their subject without contributing anything new. My idea was to assume as known the standard operations on expectations and conditional expectations and not even use the nasty word “measure.” This idea got me into trouble. My circumlocutions soon became so obscure that the text became unreadable and I was forced to make the measure theory explicit. I joked in my introduction that the unreadability of my final version might give readers an idea of that of the first version, but like so many of my jokes it fell flat. I was grateful that at least J. W. T. Youngs noticed it, but I was less grateful that it apparently mystified the Russian translators of the book, who simply omitted it.

As it turned out, one of the main accomplishments of my book was to make probability theory mathematically respectable by establishing the key role of measure theory not just in the basic definitions but also in the further working out. More precisely it became clear, or should have become clear, that mathematical probability is simply a specialization of measure theory. I must admit, however, that, although every mathematician classifies measure theory as a part of analysis, many probabilists consider that a study of sample functions is “probability,” whereas a study of distributions of random variables is “analysis.” This distinction mystifies me. While writing my book I had an argument with Feller. He asserted that everyone said “random variable” and I asserted that everyone said “chance variable.” We obviously had to use the same name in our books, so we decided the issue by a stochastic procedure. That is, we tossed for it and he won.

I wrote my Stochastic Processes book in the way I have always written mathematics. That is, I wrote with only a vague idea of what I was to cover. I had no idea I would sweat blood working up new inequalities for characteristic functions of random variables in order to make straightforward the derivation of the Lévy formula for the characteristic function of an infinitely divisible distribution. It was only a long time after I started that I decided it would be absurd to include convergence of sums of mutually independent random variables and the corresponding limits of averages (laws of large numbers) without also including the analogous results for convergence of sums of mutually orthogonal random variables and the corresponding limits of averages. And I had no idea ahead of time how the martingale discussion would develop.
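The Lévy formula mentioned here is, in one standard modern statement (the Lévy–Khintchine representation), the assertion that a distribution is infinitely divisible exactly when its characteristic function has the form

\[
\phi(t) = \exp\left( i \gamma t - \frac{\sigma^2 t^2}{2} + \int_{\mathbb{R} \setminus \{0\}} \left( e^{itx} - 1 - \frac{itx}{1+x^2} \right) \nu(dx) \right),
\]

with \( \gamma \) real, \( \sigma^2 \ge 0 \), and \( \nu \) a measure satisfying \( \int \min(1, x^2) \, \nu(dx) < \infty \).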

After the book was published in 1953, I thought that the popularity of martingale theory was because of the catchy name “martingale,” just as everyone was intrigued by my proposal (which actually never came to anything, although financing was available) that the University of Illinois should sponsor a probability institute, to be called the “Probstitute.” Of course martingale theory had so many applications in and outside of probability that it had no need of the catchy name.

When the Stochastic Processes book came out I had the best possible proof that it actually was read carefully: a blizzard of letters arrived pointing out mistakes. My second book on probability and potential theory had no such reception.

Martingales

Snell: Your Stochastic Processes book established martingales as one of the small number of important types of stochastic processes. How did you get interested in martingales?

Doob: When I started to study probability one of my goals was to obtain mathematical statements and proofs of common probabilistic assertions which had not yet been properly formulated. One of the first theorems I proved in pursuing this program was a formulation of the fact that, in the context of independent plays with a common distribution, no system of betting in which the plays to bet on depend on the results of previous plays changes the odds. This result was one of the first to make a properly defined random time an essential feature of a mathematical discussion. Von Mises had postulated a version of this result in an attempt to put probability as applied to a sequence of independent trials on a rigorous mathematical basis. His version was suggestive but it was not mathematics.

I was given Jean Ville’s 1939 book to review, in which he did not formally define a martingale but proved the maximum inequality for a martingale sequence and used it to prove the strong law of large numbers for Bernoulli trials. His work intrigued me and, once I had formulated the martingale definition, the fact that the definition suggests a version of the idea of a fair game suggested the introduction of what are now called optional times and the derivation of conditions under which sampling of a martingale sequence at an increasing sequence of optional times preserves the martingale property. This investigation in turn led to the idea of a measurable space filtered by an increasing sequence of sigma algebras of measurable sets, successive pasts of a process, which has proved very fruitful. I did not appreciate the power of martingale theory until I worked on it in the course of writing my 1953 book, but the vague idea that if one knows more and more about something one has a monotone context in some sense, and thus there ought to be convergence, suggests that under appropriate analytic conditions a martingale sequence should converge. I did not realize when I started that, long before I studied martingale sequences, they had been studied by Serge Bernstein, Lévy and Kolmogorov. The martingale definition led at once to the idea of sub- and supermartingales, and it was clear that these were the appropriate names but, as I remarked in my 1984 book [2], the name “supermartingale” was spoiled for me by the fact that every evening the exploits of “Superman” were played on the radio by one of my children. If I had been doing my work at the university rather than at home, I am sure I would not have used the ridiculous names semi- and lower semimartingales for sub- and supermartingales in my 1953 book. Perhaps I should have noted that one reason for the success of that book is the prestigious-sounding title, a translation of a name in a German Khintchine paper.
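In the language that grew out of this work: a sequence \( X_1, X_2, \dots \) adapted to an increasing family of sigma algebras \( \mathcal{F}_1 \subset \mathcal{F}_2 \subset \cdots \) (the successive pasts) is a martingale if \( E|X_n| < \infty \) and

\[
E[ X_{n+1} \mid \mathcal{F}_n ] = X_n \quad \text{for every } n,
\]

with \( \le \) in place of \( = \) defining a supermartingale and \( \ge \) a submartingale. The optional sampling theorem then states that, under appropriate conditions (for bounded optional times \( \tau_1 \le \tau_2 \le \cdots \), for example), the sampled sequence \( X_{\tau_1}, X_{\tau_2}, \dots \) is again a martingale: a fair game stays fair.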

Research and publication

Joe always worked at home. Here he is working on his potential theory book [2].

Snell: Since I was a student when you were writing your Stochastic Processes book, I got a preview.

I remember two things that amazed me. One was that you typed all seven versions (pick-punch) and the other was that it did not have a lot of examples.

Doob: My inclination has always been to look for general theories and to avoid computation. A discussion I once had with Feller in a New York subway illustrates this attitude and its limitations. We were discussing the Markov property and I remarked that the Chapman–Kolmogorov equation did not make a process Markovian. This statement satisfied me, but not Feller, who liked computation and examples as well as theory. It was characteristic of our attitudes that at first he did not believe me but then went to the trouble of constructing a simple example to prove my assertion.
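For a process with countable state space the equation in question is

\[
p^{(m+n)}_{ij} = \sum_k p^{(m)}_{ik} \, p^{(n)}_{kj},
\]

where \( p^{(n)}_{ij} \) denotes the probability of moving from state \( i \) to state \( j \) in \( n \) steps. Every Markov chain satisfies it, but Feller’s example showed that a process whose transition probabilities satisfy the equation need not be Markovian, just as Doob asserted.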

Feller was the first mathematical probabilist I had ever met and, meeting him at a Dartmouth meeting of the AMS around 1940, I felt like Livingstone when Stanley found him in Africa. I envied the Russian probability group, but Kolmogorov, who included statisticians among the probabilists, told me around that time how he envied the fact that the U.S. had so many probabilists!

I corresponded with many mathematicians but never had detailed interplay with any but Kai Lai Chung and P.-A. Meyer in probability and Brelot in potential theory. My instincts were to work alone and even to collect enough books and reprints so that I could do all my work at home. My memory was so bad that I had difficulty discussing even my own results with other mathematicians.

My system of writing mathematics, whether a research paper or a book, was to write material longhand, with many erasures, with only a vague idea of what would be included. I would see where the math led me just as some novelists are said to let the characters in their books develop on their own, and I would get so tired of the subject after working on it for a while that I would start typing the material before it had assumed final form. Thus in writing even a short paper I would start typing the beginning before I knew what results I would get at the end. Originally I wrote in ink, applying ink eradicator as needed. Feller visited me once and told me he used pencil. We argued the issue, but the next time we met we found that each had convinced the other: he had switched to ink and I to pencil.

My system, complicated by my inaccurate typing, led to retyping material over and over, and for some time I had an electric drill on my desk, provided with an eraser bit which I used to erase typing. I rarely used the system of brushing white fluid over a typed error because I was not patient enough to let the fluid dry before retyping. Long after my first book was done I discovered the tape rolls which cover lines of type. As I typed and retyped my work it became so repugnant to me that I had more and more difficulty even looking at it to check it. This fact accounts for many slips that a careful reading would have discovered. I commonly used a stochastic system of checking, picking a page and then a place on the page at random and reading a few sentences, in order to avoid reading it in context and thereby to avoid reading what was in my mind rather than what I had written. At first I would catch something at almost every trial, and I would continue until several trials would yield nothing. I have tried this system on other authors, betting, for example, that I would find something to correct on a randomly chosen printed page of text, and nonmathematicians suffering under the delusion that mathematics is errorless would be surprised at how many bets I have won.

To my mind, the most boring part of mathematical research is the work involved in making historical remarks, and I always deferred that work to the last moment. That explains why my first two books have history in appendices, and the third has practically no references whatever. After writing my Stochastic Processes I swore, “Never again! No more books!” Many years later, however, it seemed to me that the literature on classical potential theory and its probability connections was so scattered that something should be done about it, and that accounts for my potential theory book [2], after the writing of which I renewed my earlier oath on book writing.

But then after I retired I discovered computers, and — ever a gadgeteer — I was charmed by them but could find no excuse to buy one. When I discussed this problem with a retired physicist he told me he had the contents of his refrigerator listed in his computer, and of course this meant he had daily changes. This was not much encouragement, but finally I had an inspiration: if I could bring myself to write a third book, that would justify buying a computer. I had donated all my books and reprints to the Department of Statistics, so any book I wrote, working at home as usual, would have to be on a subject I knew very well, one that would not require visiting the campus to consult the library. I had taught measure theory several times and had my own ideas on how to develop the subject, ideas I had not used in my teaching, so I decided to make a compromise with my solemn oaths and write up measure theory for my own amusement, not for publication. In particular I wanted to integrate probabilistic ideas into standard measure theory, and I wanted to make systematic use of metric space ideas in measure theory. So I bought the simplest Macintosh computer and the word processor Microsoft Word. After frequent consultations and frantic telephone appeals for help to Halmos in California and Snell in New Hampshire, I had learned all the Word tricks I needed, including the rather mysterious system at the back of the Word manual for writing mathematical expressions, but the news of my writing had got out, and I was invited to publish it. This meant that I had to go over my material with more care than I had intended, and sure enough I found many serious errors, but the book was finally done and published. I am sure that Measure Theory [3] is my last book, if for no other reason than that at 87 I am now incapable of concentrated work and no longer think seriously about mathematics. Long ago, after hearing lectures by mathematicians who should have quit while they were ahead, I resolved to give no more lectures. The present maundering illustrates how right I was and that in addition I should have resolved to do no more writing.

Potential theory

Snell: Your mention of your potential theory book [2] reminds me that you went full circle from complex variable theory to probability and then back to complex variable theory. How did you become interested in potential theory?

Doob: As I remarked earlier, my first contact with rigorous analysis was a complex variable theory course taught by Osgood, using his Funktionentheorie. It is a sign of the backwardness of that theory that for many years \( f \) denoted a function outside the theory but \( f(z) \) denoted a function of a complex variable. Also I was taught that \( f \) had a derivative at \( w \) if the usual difference quotient had a limit at \( w \) when \( z \) approached \( w \) no matter how \( z \) approached \( w \). That qualification was still considered necessary in 1927! At any rate I was charmed by the subject and liked the text. I still do.
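The definition in question is simply

\[
f'(w) = \lim_{z \to w} \frac{f(z) - f(w)}{z - w},
\]

where the limit is required to exist however \( z \) approaches \( w \) in the complex plane; the qualification is redundant, since freedom of approach is already part of what a limit means, though it is precisely that freedom which makes complex differentiability so much stronger than the real-variable notion.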

Kakutani’s 1944 probabilistic treatment of the Dirichlet problem combined two of my interests, complex variable theory and probability, and I decided to try to develop their interrelations further. I soon found that functions having certain average properties, such as harmonic and subharmonic functions, would play a key role and that these average properties suggested the application of martingale theory.
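The average property in the harmonic case is the mean value equality

\[
h(x_0) = \frac{1}{\sigma(\partial B)} \int_{\partial B} h \, d\sigma
\]

for every ball \( B \) centered at \( x_0 \) with closure in the domain; subharmonic functions satisfy \( \le \) and superharmonic functions \( \ge \). Composed with Brownian motion, a superharmonic function yields (under integrability conditions) a supermartingale, which is the bridge to martingale theory that Doob describes.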

When I was invited to speak at the 1955 Berkeley Symposium on Probability and Statistics and had nothing to say, I arrived a few days early in Berkeley with an open mind and a portable typewriter. I decided to fulfill my Symposium obligation by defining a form of what is now called axiomatic potential theory, generalizing harmonic, subharmonic and superharmonic functions into functions defined on an abstract space and satisfying average properties suggested by those satisfied by these functions. This postulational approach was related to earlier work by other researchers, whose work I did not know at the time, but had not been linked to probability. Axiomatic potential theory has had an enormous expansion since those days. I soon found out that I had better learn more about classical potential theory and studied the fundamental work of Brelot, Cartan and Deny. My habit of taking definitions seriously suggested that Cartan’s fine topology should be applied in detail, and I developed it further and used it in studying limits of functions at the boundaries of their domains of definition. I thought then and still think that the fine topology should have applications in complex variable theory besides the application to the Fatou boundary limit theorem.

Of course I knew that Lebesgue’s theorem on the derivation of a measure on the line relative to Lebesgue measure had been generalized to derivation of any Borel measure on the line relative to a second one. This led me to wonder why the Fatou boundary limit theorem for a positive harmonic function, a theorem based on the derivation of a measure with respect to Lebesgue measure on the bounding circle, should not be generalized to cover the ratio of two positive harmonic functions, and I proved this generalization. I had already noted that the quotient of a positive superharmonic function divided by a positive harmonic function satisfied an average inequality like that of a superharmonic function, with an averaging measure depending on the denominator function. The corresponding ideas in probability theory led to quotients of positive martingales and to what are now called \( h \)-path processes in Markov process theory.
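Concretely, if \( u > 0 \) is superharmonic, \( h > 0 \) is harmonic and \( \mu_x^B \) denotes harmonic measure for a ball \( B \) as seen from \( x \), then \( h(x) = \int_{\partial B} h \, d\mu_x^B \), so that \( h \, d\mu_x^B / h(x) \) is a probability measure, and superharmonicity of \( u \) gives

\[
\frac{u(x)}{h(x)} \ge \frac{1}{h(x)} \int_{\partial B} u \, d\mu_x^B = \int_{\partial B} \frac{u}{h} \cdot \frac{h \, d\mu_x^B}{h(x)}.
\]

The quotient \( u/h \) thus satisfies a superharmonic-type average inequality whose averaging measure depends on the denominator \( h \), exactly the structure that reappears probabilistically as quotients of positive martingales and as \( h \)-path processes.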

Hunt’s great papers on the potential theory generated by Markov transition functions revolutionized potential theory. He and I had an amusing interplay. I thought that his papers were difficult to read and decided to make them understandable to a wider audience, including me, by applying his approach in a simple context: potential theory on a countable set, based on a Markov transition function (a matrix in this context). He then trumped my paper with a paper explaining and going beyond mine. The sequence stopped there.

Hunt’s approach to potential theory had the unfortunate effect that many mathematicians thought of potential theory as a subchapter of probability theory and that potential theoretic notions are best defined probabilistically. When I wrote my potential theory book I tried to counteract this approach by dealing with classical potential theory first and probability — mostly martingale theory — in later chapters. The result was that even I was surprised to find that classical potential theory and martingale theory were so linked that what at first sight were purely probabilistic notions, such as the martingale crossing inequalities, were counterparts of nonprobabilistic potential theory, and that proofs in the latter theory gave proofs in the former by the simple device of interpreting, for example, \( h \) as a harmonic function in the one study and as a martingale in the other. The reduction operation on \( h \) is valid in both contexts and is a key link between them. I feel there must be a theory of which both theories are special cases but have had no success in devising one.

I was in close contact with Brelot in my potential theory work and learned much from him. When he told me he would like to write a book on modern potential theory but could not because he did not know the necessary probability theory, I was confident that he would not want to do the boring work of writing such a book and I thought it would be safe to tease him by offering to write the probability part of his book if he wrote the nonprobability part. My psychological analysis was right about him but defective about me, since my own book covered both parts.

The Illinois hike

Snell: While you have retired from the Mathematics Department at Illinois, you have not retired from the Illinois Saturday Hikes. We should close with some remarks about this institution.

Doob: The Saturday Hike was started in 1909 by a classics professor. Each Saturday the group drives to woods along a river. For many years some of the hikers hiked along the river and the others found an open area and played a primitive version of baseball with a softball, but as the years progressed the numbers dwindled and finally there were too few for baseball. On hot summer days there is swimming in a river or pond. A “sitting log” in the woods serves as hiker base. In the evening a fire is built near the log, as large as needed for cooking and warmth, food is cooked and the problems of the university, Champaign-Urbana and the world are solved. Disagreement on a fact is settled by a Pie for the Hikers bet; the loser brings a pie after the fact has been researched.

The hikers stand around the fire or sit on the sitting log. On cold winter days the fires are large, and newspaper is used over sitter knees to protect them from the heat. The tradition is that the fire should be placed to make the smoke go into sitter faces. Snow or light rain is mostly dissipated by the fire; if there is heavy rain, the fire is built under a nonporous bridge.

The hike is characterized by glorious irresponsibility in action and conversation and by heavy eating. “Hikers delight” is a renowned specialty: onions and hot peppers are fried in bacon fat; when the onions are done, the fat is poured into the fire, cheese is added and the frying continues until the cheese has melted. “We played softball in cow-pastures, fried our steak, stood on the fire and rocked the night with corny song” (Richmond Lattimore in the New Republic, November 13, 1961, in honor of the Saturday Hike founder; the singing stopped in the 1940s, when the singing leaders died).

I entered the group in 1939 and went regularly every Saturday. At that time as many as 30 came out, but now there are usually at most 10. The Saturday Hike is a treasured tradition and members drive to Urbana for it from as far away as Purdue, 90 miles away.

Snell: More impressive to me is your going on those winter hikes at age 87. I’ll see you next year at my annual hike visit, but I think I’ll make it November this time rather than January!

Works

[1] J. L. Doob: Stochastic processes. John Wiley & Sons (New York), 1953. MR 0058896 Zbl 0053.26802

[2] J. L. Doob: Classical potential theory and its probabilistic counterpart. Grundlehren der Mathematischen Wissenschaften 262. Springer (New York), 1984. Reprinted in 2001. MR 731258 Zbl 0549.31001

[3] J. L. Doob: Measure theory. Graduate Texts in Mathematics 143. Springer (New York), 1994. MR 1253752 Zbl 0791.28001