#### by Peter Bickel

I first met David Blackwell when I took his course
on information theory during my first year as
a doctoral student. David had chosen as a text
Jack Wolfowitz’s
*Information Theory for Mathematicians*,
which, as the title suggests, was
somewhat dry. David made the subject come
to life. His style was well established: strip the
problem of all excess baggage and present the solution
in full elegance. The papers of his that I read,
such as those on the Blackwell renewal theorem
and on Bayesian sequential analysis/dynamic
programming, all have that character. I didn’t go
on in information theory, but I didn’t foreclose
it. My next memorable encounter with David, or
rather the strength of his drinks, was at a party he
and Ann gave for the department. When I declined
his favorite martini, he offered Brandy Alexanders.
I took two and have trouble remembering what
happened next!

And then I had the great pleasure and good
fortune of collaborating with David. I was teaching
a decision theory course in 1966, relying heavily
on David and
Abe Girshick’s
book, *Theory of
Games and Statistical Decisions*. I came across a
simple, beautiful result of theirs that, in statistical
language, can be expressed as: If a Bayes estimator
is also unbiased, then it equals the parameter that it
is estimating with probability one. In probabilistic
language this says that if a pair of random variables
form both a forward and a backward martingale,
then they are a.s. equal.
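The squared-error case is short enough to sketch here. This reconstruction is mine, not a quotation from the Blackwell–Girshick book, and it assumes the prior gives __\( \theta \)__ a finite second moment:

```latex
% Bayes under squared error loss means \delta = E[\theta \mid X];
% unbiased means E[\delta \mid \theta] = \theta. Conditioning on X
% and on \theta in turn gives
\begin{align*}
E[\theta\delta] &= E\bigl\{\delta\,E[\theta \mid X]\bigr\} = E[\delta^{2}],\\
E[\theta\delta] &= E\bigl\{\theta\,E[\delta \mid \theta]\bigr\} = E[\theta^{2}],\\
E[(\delta-\theta)^{2}] &= E[\delta^{2}] - 2E[\theta\delta] + E[\theta^{2}] = 0,
\end{align*}
% so \delta = \theta with probability one.
```

The two conditionings are exactly the forward and backward martingale properties of the pair __\( (\theta, \delta) \)__, which is how the probabilistic restatement arises.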

Unbiasedness and Bayes were here specified in
terms of squared error loss. I asked the question
“What happens for __\( L_p \)__ loss, for which a suitable
notion of unbiasedness had been introduced by
Lehmann?” I made a preliminary calculation for __\( p \)__
between 1 and 2 that suggested that the analogue
of the Blackwell–Girshick result held. I naturally
then turned to David for confirmation. We had
essentially an hour’s conversation in which he
elucidated the whole story by giving an argument
for the case __\( p = 1 \)__, where, in fact,
the result fails. He then sent me off to write it
up. The paper appeared in 1967 in the *Annals of
Mathematical Statistics*.

It is still a paper I enjoy reading. It led to
an interesting follow-up. In a 1988 *American
Statistician* paper,
Colin Mallows
and I studied
exhaustively what happens when the underlying
prior is improper, which led to some surprises.
David was a Bayesian belonging, I think, to the
minority who believed that axioms of rational
behavior inevitably lead to a (subjective) prior. He
was essentially alone in that point of view in the
department but never let his philosophical views
interfere with his most cordial personal relations.

Sadly, our collaboration was the last of my major scientific contacts with David. We were always on very friendly terms, but he would leave the office at 10 AM, which was my usual time of arrival.

After we both retired, we would meet irregularly for lunch at an Indian restaurant, and I got a clearer idea of the difficulties as well as the triumphs of his life. Despite having grown up in the segregated South, David always viewed the world with optimism. As long as he could do mathematics, “understand things” rather than “do research”, as he said in repeated interviews, he was happy.

It was my fortune to have known him as a mathematician and as a person. He shone on both fronts.