Celebratio Mathematica

David H. Blackwell

Statistics  ·  UC Berkeley

A Tribute to David Blackwell

by Peter Bickel

I first met David Blackwell when I took his course on information theory during my first year as a doctoral student. David had chosen as a text Jack Wolfowitz’s Information Theory for Mathematicians, which, as the title suggests, was somewhat dry. David made the subject come to life. His style was well established: strip the problem of all excess baggage and present the solution in full elegance. The papers of his that I read, such as those on the Blackwell renewal theorem and on Bayesian sequential analysis/dynamic programming, all have that character. I didn’t go on in information theory, but I didn’t foreclose it. My next memorable encounter with David, or rather with the strength of his drinks, was at a party he and Ann gave for the department. When I declined his favorite martini he offered Brandy Alexanders. I took two and have trouble remembering what happened next!

And then I had the great pleasure and good fortune of collaborating with David. I was teaching a decision theory course in 1966, relying heavily on David and Abe Girshick’s book, Theory of Games and Statistical Decisions. I came across a simple, beautiful result of theirs that, in statistical language, can be expressed as: if a Bayes estimator is also unbiased, then it equals the parameter that it is estimating with probability one. In probabilistic language this says that if a pair of random variables forms both a forward and a backward martingale, then the two are a.s. equal.
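For squared error loss the result has a one-line proof. The following derivation is a standard reconstruction in modern notation, not necessarily how Blackwell and Girshick presented it: write \( \delta(X) = E[\theta \mid X] \) for the Bayes estimator, and suppose it is also unbiased, \( E[\delta(X) \mid \theta] = \theta \). Conditioning each way gives
\[
E[\theta\,\delta] = E\bigl[\delta\,E[\theta \mid X]\bigr] = E[\delta^2],
\qquad
E[\theta\,\delta] = E\bigl[\theta\,E[\delta \mid \theta]\bigr] = E[\theta^2],
\]
so
\[
E[(\delta - \theta)^2] = E[\delta^2] - 2\,E[\theta\,\delta] + E[\theta^2] = 0,
\]
and hence \( \delta = \theta \) with probability one. The two conditional-expectation identities are precisely the forward and backward martingale properties of the pair \( (\theta, \delta) \).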

Unbiasedness and Bayes were here specified in terms of squared error loss. I asked the question: what happens for \( L_p \) loss, for which a suitable notion of unbiasedness had been introduced by Lehmann? I made a preliminary calculation for \( p \) between 1 and 2 that suggested that the analogue of the Blackwell–Girshick result held. I naturally then turned to David for confirmation. We had essentially an hour’s conversation in which he elucidated the whole story, giving an argument for the case \( p = 1 \), where, in fact, the result fails. He then sent me off to write it up. The paper appeared in 1967 in the Annals of Mathematical Statistics.

It is still a paper I enjoy reading. It led to an interesting follow-up: in a 1988 American Statistician paper, Colin Mallows and I studied exhaustively what happens when the underlying prior is improper, which led to some surprises. David was a Bayesian belonging, I think, to the minority who believed that axioms of rational behavior inevitably lead to a (subjective) prior. He was essentially alone in that point of view in the department, but he never let his philosophical views interfere with his most cordial personal relations.

Sadly, our collaboration was the last of my major scientific contacts with David. We were always on very friendly terms, but he would leave the office at 10 AM, which was my usual time of arrival.

After we both retired, we would meet irregularly for lunch at an Indian restaurant, and I got a clearer idea of the difficulties as well as the triumphs of his life. Despite having grown up in the segregated South, David always viewed the world with optimism. As long as he could do mathematics, “understanding things” rather than “doing research”, as he said in repeated interviews, he was happy.

It was my good fortune to have known him as a mathematician and as a person. He shone on both fronts.