# Celebratio Mathematica

## Murray Rosenblatt

### A conversation with Murray Rosenblatt

#### by David Brillinger and Richard Davis

On an exquisite March day in 2006, David Brillinger and Richard Davis sat down with Murray and Ady Rosenblatt at their home in La Jolla, California for an enjoyable day of reminiscences and conversation. Our mentor, Murray Rosenblatt, was born on September 7, 1926 in New York City and attended City College of New York before entering graduate school at Cornell University in 1946. After completing his Ph.D. in 1949 under the direction of the renowned probabilist Mark Kac, the Rosenblatts moved to Chicago, where Murray became an instructor/assistant professor in the Committee of Statistics at the University of Chicago. Murray's academic career then took him to Indiana University and Brown University before he joined the University of California at San Diego in 1964. Along the way, Murray established himself as one of the most celebrated and leading figures in probability and statistics, with particular emphasis on time series and Markov processes. In addition to being a fellow of the Institute of Mathematical Statistics and the American Association for the Advancement of Science, he was a Guggenheim fellow (1965–1966, 1971–1972) and was elected to the National Academy of Sciences in 1984. Among his many contributions, Murray conducted seminal work on density estimation, central limit theorems under strong mixing, spectral domain methods and long memory processes. Murray and Ady Rosenblatt were married in 1949 and have two children, Karin and Daniel.

Richard: Ady, how did you and Mur­ray meet?

Ady: We met in the Ford­ham Road lib­rary. He came over and we talked and went for a walk in the pour­ing rain. We walked over to his girl­friend’s house. She wasn’t home, so we left a mes­sage with her sis­ter or whatever. The next day his girl­friend said to me “Oh, that was in­ter­est­ing you dropped over the same day that Mur­ray did.” And I said “Oh yeah, we were to­geth­er.” Things went down after that. I guess we were mar­ried when we were around 23 in 1949. I was older, I was 23 and he was still 22. We were in Ithaca. I waited un­til he fin­ished gradu­ate school.

Richard: I guess he was a little slow! Were you at­tend­ing Cor­nell at the same time?

Ady: No, I had a job teach­ing swim­ming in NYC. We saw each oth­er more or less dur­ing the war and then he went up to Cor­nell and I would go up and vis­it him every now and then.

Richard: So you met in high school.

Ady: No, we met after high school.

Richard: But you gradu­ated when you were 16?

Mur­ray: Prob­ably.

Ady: Maybe I was 17, I don’t know be­cause I was born in April. I don’t re­mem­ber when I gradu­ated. I re­mem­ber when I got mar­ried!

Dav­id: Do you re­mem­ber the best man and maid of hon­or at your wed­ding?

Ady: I think it was Bert and Shir­ley Yood.

Mur­ray: Bert Yood was a spe­cial­ist on Banach al­geb­ras. I re­mem­ber tak­ing a course from him on Banach al­geb­ras as a gradu­ate stu­dent.

Dav­id: Were your par­ents and his par­ents im­mig­rants?

Ady: No, his par­ents were im­mig­rants. His fath­er was from Rus­sia (now Ukraine) and his moth­er was from Po­land. They met in the States. After we walked home from Mur­ray’s girl­friend’s house in the rain, he caught a cold. His moth­er was not happy with me when she met me be­cause I gave him that.

Richard: So you didn’t make a good first im­pres­sion!

Ady: My fath­er’s par­ents came from Po­land. He and his young­er sis­ter were born in Amer­ica. The rest of his sib­lings were born in Po­land. And my moth­er’s fam­ily came from Hun­gary and Aus­tria. My grand­fath­er was Hun­gari­an. They were all Jew­ish.

Ady: Yes, Mur­ray had one broth­er Dav­id, who was al­ways a ma­jor in­flu­ence on him.

Mur­ray: Oh, yes. Al­though I didn’t al­ways fol­low his ad­vice, I found it use­ful to listen to.

#### Attending CCNY and Cornell

Richard: Mur­ray, back at CCNY, one of your pro­fess­ors, Emil Post, seemed to have made quite an im­pact on you.

Murray: Actually, Post was a remarkable character. I think he is, at least on the American scene, maybe internationally, one of the great figures in mathematical logic. Because he was manic-depressive he used to get into these manic states occasionally and had to be institutionalized. Also, he was one-armed. He actually did some real analysis too. I took a class in real analysis with him. He was following some book with an incredible number of errors, which he corrected. And he was sort of a perfectionist. There's an amusing story. Martin Davis, a fellow student who's a well-known figure in mathematical logic today and has done some remarkable work, was also in the class. Post used to assign problems and have people come up to the board. I guess at one point he asked me, and I was starting in a particular direction when he was about to cut me off, and Davis said why don't you let him go on, which you might say saved me at that point. But when I took a reading course with him later on, this person who was so formal in class turned out to be a very pleasant human being. He used a book, actually a very lovely book, by a French mathematician by the name of C. de la Vallée Poussin. The book [e10] was his Intégrales de Lebesgue, Fonctions d'Ensemble, Classes de Baire, published in 1916. I may have a copy of it. It was actually a very elegant book, and it was a pleasant reading course to take with him, with occasional interchanges. I have rather pleasant memories of that.

Dav­id: Was he an Amer­ic­an?

Mur­ray: Oh yes, he was Amer­ic­an. Prob­ably at that time he was the most dis­tin­guished mem­ber of the CCNY math­em­at­ics fac­ulty. Un­for­tu­nately, at one point I came in­to class and he was en­thu­si­ast­ic. That was a sign that he was en­ter­ing a man­ic phase. They took him off to treat­ment I as­sume.

Richard: What oth­er courses did you take at CCNY? I was just won­der­ing how you be­came in­ter­ested in prob­ab­il­ity and stat­ist­ics.

Murray: I remember taking courses at City College in mathematical physics and thermodynamics. I probably didn't have any interest in probability and statistics then. I went to Cornell as a graduate student, and William Feller and Mark Kac were on the faculty. I took most of the courses in probability theory with Feller. I wrote my thesis with Mark Kac as my advisor. Feller was a remarkable mathematician who had very strong but humorous opinions and great enthusiasm. A fellow student, Samuel Goldberg, and I used to take notes on Feller's lectures. Feller thanked him in the introduction to his well-known, lovely book on introductory probability theory. At the beginning of a discussion of the three-series theorem, Feller said, "Isn't it obvious?" I guess some of us had enough self-preservation, or ego, to say no, we don't see that it's obvious. It took two to three lectures to go through the full development.

Richard: You went to Cor­nell with the idea of study­ing math­em­at­ics with no par­tic­u­lar spe­cialty in mind?

Mur­ray: Yes, it was math­em­at­ics. I guess there were two op­por­tun­it­ies at that time, either Brown or Cor­nell. For whatever reas­ons, I chose Cor­nell.

Ady: Didn’t they give you a bet­ter salary?

Murray: I'm sure they did. There were younger people with a good deal of interest in probability theory too. Gilbert Hunt was there and so was Kai Lai Chung. And in one year there were quite a number of visitors: Doob, Donsker, Darling and various of Doob's students, I guess. There must have been Laurie Snell and John Kinney. So there was a good deal of activity in that area. I guess I probably took a course in mathematical statistics, and I suppose it was given by Feller. It was good as a student to be working on a thesis under Kac — you knew with Kac, you could come in and talk with him if you wanted to and you would get good advice. He didn't have strong opinions about this being the direction to go into, while Feller did have such propensities. Mario Juncosa was a graduate student with me there. We still keep in contact with him; he has been at Rand for many years. Juncosa was a student of Feller's and completed his thesis there.

David: Ady, did you get to know Mark Kac? What did you think of him?

Ady: I thought he was a lovely per­son. He was very, very kind and very nice and help­ful. His wife was Kitty. We saw them through the years ac­tu­ally.

Mur­ray: He moved to Cali­for­nia, to USC.

Ady: He was at Rock­e­feller be­fore. We used to see him there at USC. Shortly be­fore he died we used to go up to the theat­er with them. He was really nice.

Richard: Were gradu­ate stu­dents sup­por­ted as teach­ing as­sist­ants and such?

Mur­ray: The first year I had an Erastus Brooks fel­low­ship at Cor­nell. The second year I taught classes. The Of­fice of Nav­al Re­search came through with sup­port in the last year. So I was sup­por­ted, ini­tially on a fel­low­ship, then what amoun­ted to as­sist­ant­ships.

Ady: Didn’t ONR sup­port you with grants all the way through your ca­reer?

Mur­ray: A good deal of the time, but I also had par­tial sup­port from NSF. Cer­tainly, the Of­fice of Nav­al Re­search sup­por­ted me.

Richard: I sup­pose I was also sup­por­ted at some point on Mur­ray’s ONR grants dur­ing my gradu­ate stud­ies.

Mur­ray: I think a good many of the stu­dents I had were sup­por­ted by NSF too but mainly the Of­fice of Nav­al Re­search.

Richard: At Cor­nell did they sup­port you dir­ectly or was it funneled through a fac­ulty mem­ber?

Mur­ray: It must have funneled through a fac­ulty mem­ber, maybe Mark Kac.

Dav­id: I pic­ture you as in­ter­ested in ap­plic­a­tions in the phys­ic­al sci­ences. Did that start when you went to Brown or were you do­ing that at Cor­nell also?

Mur­ray: Just in terms of the back­ground at Cor­nell, Kac al­ways had in­terest in those parts of phys­ics re­lated to stat­ist­ic­al mech­an­ics so there’s def­in­itely in the back­ground an in­terest in ap­plic­a­tions. A good deal of prob­ab­il­ity the­ory ini­tially was mo­tiv­ated by ap­plic­a­tions of sorts, maybe ini­tially to gambling sys­tems but oth­er areas too. I sup­pose some of it may re­late to un­der­gradu­ate courses at CCNY in math­em­at­ic­al phys­ics and ther­mo­dy­nam­ics. Well, you know, my thes­is…what was the title of my thes­is?

Richard: I have it here, Mur­ray, in case you can’t re­mem­ber.

Mur­ray: Maybe something on Wien­er func­tion­als.

Richard: “On dis­tri­bu­tions of cer­tain Wien­er func­tion­als.”

Murray: Right, and that was an attempt to mildly generalize some results of Kac. You know, this is really related to what was later referred to as the Kac–Feynman formula. A revised version of it is published in a paper [1] called "On a class of Markov processes," which appeared in the Transactions of the American Mathematical Society. What it does is consider the integral of some function of both time and Brownian motion. What one essentially does is look at the Laplace transform of the fundamental conditional distribution and relate that to a solution of an associated parabolic differential equation. So at least formally, that's related to the formula.
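In modern notation, the connection Murray sketches is usually written as follows. This is only an illustrative statement of the Feynman–Kac relation; the symbols $$u$$ and $$V$$ here are ours, not the notation of [1].

```latex
% Illustrative Feynman--Kac relation (modern notation; not the exact
% statement of [1]). For Brownian motion B_t and a suitable function
% V(t,x), define the conditional expectation
\[
  u(t,x) \;=\; \mathbb{E}\!\left[\, e^{-\int_0^t V(s,\,B_s)\,ds} \;\middle|\; B_0 = x \,\right].
\]
% Then, formally, u solves the associated parabolic equation
\[
  \frac{\partial u}{\partial t} \;=\; \frac{1}{2}\,\frac{\partial^2 u}{\partial x^2} \;-\; V(t,x)\,u,
  \qquad u(0,x) = 1,
\]
% linking the distribution of the integral functional of Brownian motion
% to a parabolic differential equation, as described above.
```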

Murray: Actually, I wrote a master's thesis, and it was never published. That thesis was on definitions of absolute continuity for functions of two variables. As I remember, my thesis committee included two members. One of the members, I'm trying to remember, was, in his own day, a well-known mathematician called Wallie Hurwitz.

Ady: Oh, right. He was good at the stock mar­ket, right?

Mur­ray: Oh yes, he knew how to in­vest in the stock mar­ket and he left quite a bit to Cor­nell Uni­versity, to the Math­em­at­ics De­part­ment. But he was very good. My doc­tor­al thes­is com­mit­tee in­cluded three people. I re­mem­ber Mark Kac of course, who was my thes­is ad­visor and Mor­ris­on.

Dav­id: Which Mor­ris­on?

Murray: The Philip Morrison who retired eventually from MIT. He is a well-known name in physics. I'm not sure I still have it [my thesis]. It may be at school or I might have lost it. It's possible Harry Pollard was part of that committee too, but I don't know. If he was, he may have helped me through, because I'm sure some of my answers to the physically oriented questions of Morrison may not have been that adequate. Actually, I had what was called a minor in physics, which consisted of a series of courses in quantum mechanics taught by Hans Bethe.

Dav­id: Wow! And Feyn­man was there too.

Mur­ray: Oh yes. Feyn­man was a hil­ari­ous char­ac­ter.

Richard: What was that like? I guess there was one sym­posi­um with him and Feller go­ing at it.

Murray: There was a lecture. No, not a lecture, but an interchange with Feynman, Kac and Feller. Feynman had such agility in terms of instantaneous and spontaneous response. He put both of them to shame. The one person you can compare him to was the prime conservative, Milton Friedman, and I thought Feynman was even better than Friedman. Friedman was probably the most articulate defender of the conservative perspective in economics.

Dav­id: I guess in World War II he was a part of that stat­ist­ics group at Columbia.

Murray: I think Hotelling started out as the head of that group. It ended up with Allen Wallis as head, probably due to Wallis's administrative abilities, I don't know. So from that point of view, I guess you might say my interest in physics partially comes from my graduate student days, since I was exposed to some of it then.

Richard: You fin­ished in three years; that seems in­cred­ibly fast to me, es­pe­cially if one in­cludes a Mas­ter’s thes­is on top of that.

Murray: Kac must have been the reason. I am eternally indebted to Kac as the person who served as a thesis advisor and may have helped occasionally with suggestions, but sort of left you alone without saying you've got to do this or that, so forth and so on. He let you go your own way.

Richard: This sounds fa­mil­i­ar ac­tu­ally.

Mur­ray: In what sense? In your case, I didn’t have to give any dir­ec­tion any­ways. What did I do? I sug­ges­ted an area, you got in­to it and worked on it.

Mur­ray: Well, it seemed to me if a stu­dent is bright enough to make his own way, why do you have to im­pose on him?

Dav­id: The range of their thes­is top­ics is very broad.

Mur­ray: Well, one stu­dent I got in­to a field was Richard Brad­ley and ob­vi­ously he con­tin­ued. He has be­come the great ex­pert on strong mix­ing. Look at his mar­velous three volumes on strong mix­ing con­di­tions [e11].

Richard: It al­ways seems like you had a hid­den motive in mind re­gard­ing the top­ics that we worked on. When I was a gradu­ate stu­dent, Rick Brad­ley was work­ing on strong mix­ing, Ed Mack was work­ing on dens­ity es­tim­a­tion and I was work­ing on ex­tremes un­der mix­ing con­di­tions and it seemed like….

Mur­ray: You were also look­ing at some as­pects of Markov pro­cesses, re­l­at­ive to ex­tremes, right?

Richard: Yes, but the main top­ic was ex­tremes of sta­tion­ary pro­cesses and it seemed that you had some oth­er ap­plic­a­tion in mind. There was a con­nec­tion between these com­pon­ents that you saw but was in­vis­ible to us.

Murray: I don't think it was anything that conscious. You know what I was doing. If a student wanted a thesis topic and I hadn't thought of one, it seemed reasonable to suggest things I had a marginal acquaintance with that sounded interesting, and to see whether it was possible to work on them.

Richard: Later, I could see the con­nec­tion with dens­ity es­tim­a­tion and the res­ults you had with Bick­el on max­im­um de­vi­ation of dens­ity es­tim­ates.

Murray: The density estimation actually comes out of the spectral estimation in a direct manner. It is sort of silly because it's obvious. It's an example of how things can run contrary to the usual notion of doing things in a simpler situation first and then going on to greater complexity. I mean, what happened in density function estimation was that I had certain results on spectral estimation. I saw the paper of Fix and Hodges [e9], and in my paper on density estimation that paper is referred to. The notion was good. They proposed some density estimate, and the notion was: given certain results for spectral estimates, why shouldn't there be similar results, even simple ones, for density estimates? It's really a hilarious affair, because what is a density estimate but a smoothing of a histogram, right? It's an example of a more complex situation leading back to a simpler context. In fact, if you go back and take this stuff seriously, the Einstein paper goes back even 30 years before. So it's an example of how things don't always go the way you think rationally they ought to.
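The "smoothing of a histogram" idea Murray describes is what is now called a kernel density estimate, $$f_n(x) = \frac{1}{nh}\sum_{i=1}^n K\big(\frac{x - X_i}{h}\big)$$. The following is a minimal sketch of that idea with a Gaussian kernel; the function name, bandwidth choice, and data are ours for illustration, not the exact estimator studied in [5] or [e9].

```python
import numpy as np

def kernel_density_estimate(x, data, bandwidth):
    """Evaluate f_n(x) = (1/(n*h)) * sum_i K((x - X_i)/h) at a point x,
    using a Gaussian kernel K. This smooths the sample much as one would
    smooth a histogram."""
    data = np.asarray(data, dtype=float)
    u = (x - data) / bandwidth                    # scaled distance to each observation
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # Gaussian kernel weights
    return k.sum() / (len(data) * bandwidth)

# Illustration: estimate the density of a standard normal sample at x = 0,
# where the true density is 1/sqrt(2*pi) ≈ 0.3989.
rng = np.random.default_rng(0)
sample = rng.standard_normal(5000)
print(kernel_density_estimate(0.0, sample, bandwidth=0.3))
```

Smaller bandwidths track the histogram more closely; larger ones smooth more heavily, which is the same bias–variance trade-off that arises in spectral estimation.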

#### The Chicago, Indiana and Brown years

Dav­id: Did you con­tin­ue to live in Ithaca after you were mar­ried?

Ady: We were there for a year and went to Chicago. There were no academic jobs, and Murray was on his way to accepting a government job; it seemed to be the only thing open at the time. We ended up at our parents' house and then he gets a call there from Chicago: "Would he come?"

Murray: Mark Kac, I guess during one of his travels, must have gotten a contact there. I stayed on at Cornell for one year in a postdoc position that was funded by the Office of Naval Research, I think. At that time there was this statistical group at the University of Chicago. It wasn't called a department, but a committee of statistics. Allen Wallis, Jimmie Savage and Charles Stein were members of it. Actually, Stein stayed with us for a few months when he first came to Chicago, and then he left later on to go to Stanford. That must have been the time after he left Berkeley because of the loyalty oath issue.

Dav­id: It must have been ma­gic.

Murray: After a while, I wanted to leave actually. There were some difficulties. But that's when I went to Indiana. I guess Julius Blum was at Indiana at that time. In fact, certain aspects of Chicago were very good. Well, there were a number of people that visited. It was a nice aspect of the place, and Grenander visited, and that was the time when we started doing joint work on the book (see [7]). I went to Sweden for two-thirds of a year in 1953. I guess people like Henry Daniels, Mosteller and other such people visited Chicago at that time. I guess there were difficulties in getting the book with Grenander published. It was initially supposed to be published as part of the University of Chicago series, but I guess there's the amusing aspect of how you get things published and what the difficulties are. They had a very large editorial board, which I'm sure had some very good people, but in the reviews of the book, some people liked it and some didn't. From my point of view, maybe not Grenander's, I thought one had mutually contradictory recommendations. Eventually we had it published by a commercial publisher.

Richard: Wiley pub­lished the book in the end, right?

Mur­ray: Yes.

Richard: But this must have res­ul­ted in a wider dis­sem­in­a­tion.

Mur­ray: I didn’t do it be­cause of that. I did it be­cause I have to ad­mit I did not have any more pa­tience with the Chica­go series.

Richard: In the end it might have been a bet­ter situ­ation all the way around.

Mur­ray: Oh, I think look­ing back, you’re right.

David: You educated me about refereeing and about what it's like when you make a mistake in a paper. I was all upset, as I had found a mistake in a paper of mine, and you said, "Oh, that's your first time?" or some such.

Mur­ray: Oh, you were up­set about a mis­take.

Dav­id: Yes. I had found a mis­take in a pa­per that had ap­peared. You calmed me down.

Mur­ray: I have had enough mis­takes.

Dav­id: You need someone seni­or to tell you that when you’re young.

Richard: It’s not the end of the world.

Mur­ray: My feel­ing is that just about every­one run­ning around on earth has found some er­rors some­where and hope­fully gets it cor­rec­ted in time or cor­rec­ted even­tu­ally.

While at Chicago, I surely benefited from contact with Bahadur. There's this little paper [5] on density estimation with a little bit on nonparametric aspects at the very beginning of the paper. I certainly benefited from discussions with Bahadur. I remember a dinner we were invited to by the Bahadurs. I guess we weren't used to spiciness in Indian foods. Initially the shock was to my sense of taste; it was overwhelmed. Only gradually did I begin to taste something, but it was very good food.

Murray: While at the University of Chicago, I wrote a minuscule paper on economics which actually got published.

Dav­id: I guess I saw the pa­per in Eco­no­met­rica, en­titled “An in­vent­ory prob­lem” [3], and thought it was op­er­a­tions re­search.

Ady: One thing I’ll say about Mur­ray: I know a lot of people make a lot of ex­cite­ment and fuss when they write pa­pers, but when Mur­ray is do­ing his work he’s quiet.

Richard: There’s no jump­ing up and down?

Ady: No, he doesn’t get angry and jump up and down.

Richard: He doesn’t high-five you when he fin­ishes a pa­per?

Mur­ray: There were some very nice as­pects about my times at Chica­go, par­tic­u­larly, as I said, with the vis­it­ors. One of the vis­it­ors I had a nice in­ter­change with was Joe Hodges. We ac­tu­ally wrote two pa­pers. One was a joint pa­per with Brown­lee, which was on the up-and-down meth­od. And the oth­er was rather a cute pa­per on ran­dom walks with Joe, who’s really a very bright guy. Did you have any con­tact with him?

Dav­id: Oh yes, I did. I re­mem­ber you say­ing very early on, when I got to know you, how im­pressed you were with Joe Hodges. You thought he was a su­per ap­plied stat­ist­i­cian.

Murray: Speaking of Joe Hodges, there was a paper [e9] on density estimation that was unfortunately never published, at least initially.

Ady: Didn’t you write a pa­per with the psy­cho­lo­gist, Cle­tus Burke?

Murray: We got to meet in Indiana. I worked with him on functions of Markov chains. These seemed to have some interest in economics and psychology: questions on collapsing of states and whether Markovian properties are retained or not.

David: I guess it was a hidden Markov process, which has been the rage for many years now. You collapse some states, and there's a Markov process in the background that you try to learn about. One can try fitting those things anywhere you can imagine.

Mur­ray: Right, I guess the claim is that it’s use­ful. Cle­tus Burke was a psy­cho­lo­gist at In­di­ana at the time.

Ady: Did you do any pa­pers with him?

Mur­ray: Yes, there was a pa­per [8] I wrote with him that ap­peared in the An­nals of Math­em­at­ic­al Stat­ist­ics in 1958.

Richard: Mur­ray, what are you look­ing at over there?

Murray: It's just a collection of my papers that I put in bound form so I could remember what I had done. The title of the paper was "A Markovian function of a Markov chain." Oh, I wrote two papers with Burke. The other paper was called "Consolidation of probability matrices" [11].

Dav­id: So he’s a psy­cho­lo­gist?

Mur­ray: He was a psy­cho­lo­gist. He was one of these in­ter­est­ing people who was trained as a met­eor­o­lo­gist to­wards the end of WWII. I am very thank­ful for meet­ing a bunch of bright char­ac­ters, in­clud­ing yourselves, along the way.

Dav­id: You did very well for me when I was a young­ster. You too, Richard? Mur­ray, you were a won­der­ful role mod­el. You re­main my aca­dem­ic role mod­el. I hope when I’m 80, I’m talk­ing like this and Lor­ie is sit­ting there cor­rect­ing me.

When did you start do­ing re­search in the phys­ic­al sci­ences? I first met you at Bell Labs and you were cer­tainly do­ing it then. In your pa­per on band­lim­ited noise and so on, you al­ways seemed to be talk­ing to a lot of en­gin­eers.

Murray: I don't know when it really started. Partly it may have already been the time at Brown, because of the setup there. There was this sort of initial consulting setup together with the professorship, which was a contact with Bell Labs. That led to a nice contact with David Slepian. Dave and I actually wrote a paper on Markov chains with every $$n$$ variables independent [18]. I don't know if you ever saw it?

Dav­id: I know that pa­per.

Murray: It was sort of amusing that there was someone at Bell Labs, Bela Julesz, a fellow who was interested in vision and pattern recognition. His work partly focused on the ability of the human eye to see patterns, or to distinguish between a random assemblage of dots and patterns. That actually led to a joint paper with Dave.

Richard: That would be in the early ’60s.

Dav­id: Yes, that’s when I met Mur­ray. I was go­ing to ask about the cep­strum ana­lys­is be­cause that was what he was work­ing on, the stat­ist­ics of that. So when you guys came to Lon­don, Mur­ray and I had already met. That Brown sym­posi­um had a bunch of stuff in it.

Richard: Did you at­tend the Brown sym­posi­um?

Dav­id: No. I had fin­ished at Prin­ceton and I had a postdoc in Lon­don. Back then you could take a postdoc and not ever worry about find­ing a job. It was sort of won­der­ful be­cause you got a broad per­spect­ive, but you didn’t ex­pect to make any money, so the two things went to­geth­er.

Richard: Quite a few people went through Bell Labs.

Dav­id: Bell Labs was won­der­ful. It was the best job in my life, but now it’s been des­troyed. It was clearly the best in­dus­tri­al or­gan­iz­a­tion and could equal any math or stat de­part­ment. And you got to work on any­thing you wanted to work on. There were these tre­mend­ous spin-offs; that’s what they were count­ing on. These spin-offs had all these cre­at­ive people work­ing. Slepi­an was one of the won­der­ful guys.

Mur­ray: I ac­tu­ally be­nefited greatly from con­tact with Dave Slepi­an. I also met Stu­art Lloyd, a very bright guy. While I was there I wrote a pa­per [14] on nar­row band pass fil­ter­ing.

Dav­id: That is my fa­vor­ite pa­per of yours ac­tu­ally.

Mur­ray: I don’t know.

Dav­id: No, I do know! It brought up the en­gin­eer­ing in a bright and in­ter­est­ing way. That was a Eureka mo­ment for me.

Mur­ray: Now I be­gin to re­call ac­tu­ally, for ex­ample, the pa­per [4] on strong mix­ing came out while I vis­ited Columbia, even be­fore Bell Labs. I don’t know how I came across it but I was mo­tiv­ated to look at some old pa­pers of Serge Bern­stein — at least the ba­sic tech­niques, such as prov­ing the cent­ral lim­it the­or­em and such, things with the strong mix­ing by break­ing things down to blocks. This goes back to Serge Bern­stein or maybe earli­er.

Dav­id: If I could re­turn to cep­strum ana­lys­is. When I met you, you were down at the Labs and you were look­ing at some stat­ist­ic­al prop­er­ties of cep­strum ana­lys­is. You know, there was this pa­per at Brown by Bogert, Healy and Tukey [e6] where they did all sorts of stuff. And you were look­ing at the stat­ist­ic­al prop­er­ties.

Mur­ray: That was one of the things that happened at Brown. The Of­fice of Nav­al Re­search fun­ded a con­fer­ence on time series ana­lys­is. I and sev­er­al oth­er people were in­volved in help­ing or­gan­ize it. I ed­ited the pro­ceed­ings of the con­fer­ence.

Dav­id: A cli­mactic mo­ment in time series ana­lys­is, that con­fer­ence. In the book there were im­port­ant mile­stones in time series.

Mur­ray: It was a nice op­por­tun­ity to bring things to­geth­er.

Richard: Mur­ray, who at­ten­ded the work­shop at Brown?

Murray: Well, it was quite an affair. I think I even have a copy of the proceedings. I'll read off the names. There's a paper by Jim Durbin and one by Ted Hannan, one by someone named Lyttkens (Swedish), M. S. Longuet-Higgins, Gordon Newell, who was at Brown.

Dav­id: He was a nice man. He was at Berke­ley.

Mur­ray: Yes and Dave Slepi­an, and Richard Jones. Then Has­sel­mann, Munk and Mac­Don­ald and Wil­lard Pier­son. I don’t know if you know that name.

Dav­id: Yes, the ocean­o­graph­er.

Mur­ray: OK, Mon­in. I’m not sure if Mon­in was ac­tu­ally there, but he con­trib­uted a pa­per. Manny Par­zen, En­ders Robin­son. Leo Tick, a fel­low called Sirazdinov from the So­viet Uni­on. Then there was this pa­per, Bogert, Healy and Tukey. This was the cep­strum ana­lys­is. And then Wal­ter Freiber­ger, Roy Good­man.

Dav­id: He [Good­man] died too young.

Murray: Yes, he did. OK, then Jenkins and Kallianpur, Bill Root, Akiva Yaglom, I. J. Good, S. O. Rice, Ted Anderson and myself. It was remarkable to get some of these people.

Dav­id: It was every­body, Mur­ray.

Ady: Didn’t some­body say that you mixed too many fields or something?

Murray: I don't remember that. That was later. I did some joint work at UCSD with a very bright person in the engineering department, a very good experimentalist, called Charles Van Atta. At that point, ONR and NSF must have contributed a certain amount of money for a conference on "Statistical models and turbulence" [22]. Much of our joint work involved Keh-Shin Lii and Ken Helland, a student of Van Atta. We met up with some reaction from people in the field, versus people outside the field. One person, apparently very well regarded in turbulence, had been supported individually by ONR and other agencies throughout the years. I guess he may have felt a competitive aspect.

Dav­id: I al­ways thought that if one did a study of what would have happened after this con­fer­ence, you would see an in­cred­ible burst of time series activ­ity. Did you make the list of in­vit­ees at this Brown work­shop?

Mur­ray: I’m not sure.

Dav­id: This was so im­port­ant, that con­fer­ence.

Murray: I'm sure I contributed, but other people had to too. I mean, for example, I wouldn't have known the work of Hasselmann, Munk and MacDonald. Later on when I visited England, it must have been partially Guggenheim fellowship support and also ONR. I guess that's when we met and we worked on those papers on higher order spectral estimates ([20] and [21]).

David: I met you at Bell Labs in, like, '61 or '62. It was after that Brown symposium, and they had asked you to work on the distribution of the cepstrum estimate. I remember you had some notes because you gave me copies. I don't think you ever wrote a paper on that.

Murray: I don't think so; I don't remember it. I was led to the paper with Dave Slepian. There was the paper on narrow-band noise that came out of the contact with Bell Labs, and partly with Stu Lloyd; then there were also some papers on the asymptotic behavior of eigenvalues of Toeplitz forms [17].

Dav­id: Slepi­an was def­in­itely work­ing on that.

Mur­ray: There’s a vari­ety of people.

Dav­id: It was a ma­gic place, Bell Labs. It’s just so pathet­ic now.

Murray: I think the breakup of AT&T and its support of Bell Labs was a catastrophe. They claim that phone calls are cheaper. I think it’s a mess today. I don’t know what you think.

Dav­id: I agree totally.

Murray: I think that the Bell Labs research group was superb. It was equal to if not better than most academic groups. I don’t know what influence Tukey had. I never really had any contact with Tukey except at meetings. But I have the feeling he probably had a strong influence on certain aspects of Bell Labs. It was a delightful experience having the contact with Slepian; I used to find Slepian by going down to the Labs. Slepian was a great person.

David: Hamming was another, for example, when you had computing problems. He was another of those very open guys; I had the feeling that if they never wrote a paper in their lives they wouldn’t worry about it.

Murray: Hamming was sort of an amusing character, sort of a jokester I thought. It was sort of sad leaving Brown. Anyway, I left Brown and was able to get established here. We’ve been here since then, but I guess most of the group at Brown broke up. The group that I was in was sort of an applied mathematics group, focused on some of the classical aspects of applied mathematics: elasticity, plasticity, and fluid mechanics. Well, it still exists nominally, but alongside another group that came into place after the one I was in.

Ady: Wal­ter Freiber­ger comes to mind.

Mur­ray: Well, Wil­li­am Prager was the eld­er states­man of the group ini­tially. Via my daugh­ter one hears an as­pect of aca­dem­ic polit­ics. Now it’s at a dis­tance and as a re­tired per­son, one is re­lieved of con­cerns.

#### Settling into UCSD

Dav­id: You are not re­tired. Haven’t you been do­ing all this emer­it­us stuff? I looked you up on the web and that was one of the things I saw; “Mr. Pres­id­ent” of the emer­iti club.

Mur­ray: Well I got in­volved with that through George Back­us. George Back­us was pres­id­ent of the emer­iti and I guess he didn’t have much suc­cess in per­suad­ing someone else to take over as pres­id­ent so he ap­proached me and per­suaded me.

Dav­id: I got the im­pres­sion that you did it very ser­i­ously. By search­ing on the web un­der Mur­ray Rosen­blatt, the news­let­ter UC­SD Emer­iti Chron­icles March 2002 comes up. You wrote it.

Mur­ray: Oh, that was a nice idea to get re­min­is­cences of people at the be­gin­ning of UC­SD. A fel­low out­side of math­em­at­ics is per­suad­ing people in dif­fer­ent areas to write about their back­ground, what happened at the be­gin­ning. I think it’s a very nice idea and, yes, he per­suaded me. I don’t know if I did a very good job.

Dav­id: It’s very clear that you were at the start of a very im­port­ant uni­versity.

Murray: Oh yes, but it’s amusing. I guess some of the administrators at UCSD persuaded someone to come in and write the history. At least in my view, you know, it’s one of those histories that’s greatly distorted. My impression is that what it said about the mathematics department had nothing to do with what actually went on; likewise, some people at Scripps say it seems to be at quite a bit of variance from what they remember at Scripps. I’ve been getting the individual reminiscences of people in different areas. This could be considered a remark on the administrative representation, compared to the faculty representations, of what took place. Have you ever met up with that?

Dav­id: Oh, totally! Whenev­er there’s a news­pa­per art­icle on something I know about, it’s totally wrong, so it makes me won­der about the art­icles I read that I don’t know any­thing about. Can you be­lieve any of those things? But did you have fun when all of this was go­ing on? You were chair­man and you were right in the cen­ter of a won­der­ful time.

Mur­ray: I was chair­man for one year and got out of it; luck­ily, the per­son who built up the de­part­ment was Stephan Warschawski and he brought me as well as oth­er people in. I guess a few years after he came, he had what was called a heart in­suf­fi­ciency so they per­suaded me to take on the chair­man­ship for a year. I guess the no­tion was to per­suade me to con­tin­ue, but I de­cided, from my ex­per­i­ence from that one year, that one year was enough for me as chair­man.

Ady: Oh, but you loved it.

Dav­id: You prob­ably learned how the sys­tem worked and so all these won­der­ful prob­ab­il­ists came, that prob­ably was no ac­ci­dent I’m sure.

Mur­ray: No, they ac­tu­ally came while I was away in Eng­land. I think people like Ron Getoor, Ad­ri­ano Gar­sia and oth­ers came and I think that’s a trib­ute to Warschawski, ac­tu­ally. But no, it was great. I re­mem­ber com­pos­ing a five-year plan for the de­part­ment and I had the feel­ing that after it was gen­er­ated it prob­ably ended up in a file cab­in­et some­where. The five-year plans, and what later on you saw the de­part­ment would be faced with, were of­ten mu­tu­ally in­con­sist­ent. So I think five-year plans are al­ways gen­er­ated and maybe they have an in­flu­ence and maybe they don’t.

David: Well, you do something that forces people to think about things and structure things; whether the actual words and details matter, I don’t know. I just saw this wonderful university being created in California, in San Diego, and so on. You don’t want to take credit for these guys, but I’m sure they were happy to come to San Diego because they had you to talk to and things like that.

Richard: What was the sales pitch to bring all these people out to San Diego? It seemed like a risky ven­ture for an es­tab­lished pro­fess­or to be­come en­gaged with start­ing a new uni­versity. I guess you didn’t have un­der­gradu­ates the first year you were here.

Mur­ray: Scripps was the basis. It had been here for many years. Ini­tially, the claim was the in­sti­tu­tion was ba­sic­ally go­ing to be a gradu­ate uni­versity. That, un­for­tu­nately, dis­ap­peared rap­idly after a few years. So ini­tially that prob­ably per­suaded many people to come. Cer­tain groups may have been en­cour­aged to come to­geth­er; some­times that’s suc­cess­ful if you get a good group. Some­times you don’t get such a good group and it’s not that suc­cess­ful. But of­ten UC­SD had suc­cess. It was already clear the group was break­ing up at Brown. It was clear that cer­tain things wer­en’t that pleas­ant at Brown.

Dav­id: It must have been pretty ex­cit­ing to come out here though: gradu­ate uni­versity, south­ern Cali­for­nia, right where the ocean is?

Ady: It is right where the ocean is. I think that the chairman Warschawski was a very good chairman. He made people feel at home.

Mur­ray: He was ac­tu­ally the best chair­man I ever had.

Ady: He and his wife were very sweet. They made us feel very at home.

Mur­ray: He was genu­inely con­cerned about build­ing up a good group.

Murray: People like George Backus and Freeman Gilbert used time series analysis to analyze earthquake data. There was always something interesting going on there, and there was this contact with some of the people in engineering. Don Fredkin, a physicist, was a coauthor on some papers with John Rice.

Mur­ray: We’ve settled down here. You know, we wandered around a bit. We went to Chica­go, and then In­di­ana and then Brown.

Ady: And you were at Columbia for a while.

Mur­ray: I was just vis­it­ing for a while. It wasn’t as long as a year, maybe a semester. I sup­pose at some time there may have been some sort of ne­go­ti­ation with Columbia. That was already at the time I was think­ing of leav­ing Chica­go.

Dav­id: That was with Rob­bins and An­der­son.

Mur­ray: At that time, Rob­bins and T. W. An­der­son.

Richard: So the de­vel­op­ment of prob­ab­il­ity at San Diego was more by ac­ci­dent. It wasn’t a con­scious ef­fort?

Mur­ray: I think the busi­ness of try­ing to build up the stat­ist­ics, as well as prob­ab­il­ity the­ory, was a con­scious ef­fort. People like Richard Olshen and John Rice came. Un­for­tu­nately they left, but now we have young­er people. Ac­tu­ally, it’s curi­ous that there’s al­ways this ques­tion of con­tact between the math­em­at­ics de­part­ment and the school of medi­cine. One of the nice and in­ter­est­ing as­pects about UC­SD was that ini­tially there was the no­tion of hav­ing pro­fess­or­ships that would be tied to aca­dem­ic de­part­ments and the med­ic­al school. I think Olshen was in one of them. An­oth­er per­son, who was in prob­ab­il­ity the­ory, had one of these and was ex­pec­ted to in­ter­act with the school of medi­cine re­l­at­ive to bio­s­tat­ist­ics.

Dav­id: People used to talk a lot about the prob­lem of stat­ist­i­cians in a math­em­at­ics de­part­ment. Ba­sic dif­fi­culties with salary and things like that.

Murray: I think there are difficulties; initially, I was against having a separate statistical group. The statistical group that existed was too small and I had the feeling the likelihood of fracturing would be very great. I think if you have a large enough group, it’s a great idea, so statistics still exists to a certain extent. There are about four or five appointments in statistics in the mathematics department, but there are a number of appointments in this community medicine department, maybe 12 people. It’s a different sort of affair because, I guess, quite often they are not fully funded by the university. They are funded to a great extent on grants.

Dav­id: I think they are of­ten sup­por­ted on con­tracts not grants, which means they have to agree to do cer­tain things. Cre­at­ive people don’t want to be in that situ­ation.

Murray: I think that’s a great idea if the problems are interesting and if they can write them up. There are great difficulties with their advancement. Often, the doctors give them only marginal credit in papers written jointly. So, if the statistical question is sufficiently interesting, they should write up papers separately. The community medicine department and medical school have had to appoint some statistical types because I suspect Washington demands it now, to get some sort of statistical confirmation of medical advances.

Dav­id: What about the eco­nom­ics crowd? Granger, he’s been here for a long time.

Mur­ray: I haven’t had any great con­tact with Granger. I’ve been on com­mit­tees of their gradu­ate stu­dents. I’ve been to some of their sem­inars. I feel that eco­nom­ics is a dif­fi­cult area, really, not in terms of the the­or­ies that ex­ist, but in terms of what really hap­pens. Maybe I’ve al­ways had a cer­tain bi­as re­l­at­ive to this no­tion of the “ra­tion­al man.” I think it’s so far from what ac­tu­ally hap­pens in real life, I mean, after all the talk of ideal­ized free en­ter­prise. I think it’s rare that any­one sees any­thing like that in ac­tu­al prac­tice. I should say also I think eco­nom­ics is a very in­ter­est­ing field. However, get­ting back to what I was say­ing, one has these in­dir­ect in­vest­ments, you know, in re­tire­ment plans and when one looks at what goes on with these com­pan­ies, with these CEOs, the ma­nip­u­la­tion of the mar­ket, it sounds so dif­fer­ent from all these ideal­ized mod­els. I don’t know if you have a sim­il­ar feel­ing, the both of you.

Richard: I don’t think about it in quite those terms — I just like to look at the ac­tu­al time series data!

Mur­ray: The ac­tu­al time series, not loc­ally but glob­ally, has to be in­flu­enced some­times by these af­fairs. In fact, I think it would be in­ter­est­ing, really in­ter­est­ing, to have someone ana­lyze just what takes place in the stock mar­ket due to the in­ter­ven­tion of these CEOs. That would be a re­mark­able af­fair be­cause I think that would get you a little closer to what some­times does take place. I’m not sure it would be that dif­fi­cult to do either.

Richard: I found that with some ex­amples — maybe I told you about this — but if you try to fit something like an ARMA mod­el, some­times you get these non­caus­al sorts of mod­els that may be sug­gest­ing some type of an­ti­cip­at­ory ac­tion. In ex­amples I’ve seen, something like the volume shares for Mi­crosoft, you get this be­ha­vi­or. Mi­crosoft is in the news a lot so you can see why this might hap­pen. For oth­er series you may not see it.

Murray: I think you are absolutely right. That’s not a criticism but an indication that some of the emphasis on causality may be overdone occasionally. So my feeling is that economics in certain ways is more difficult than other fields if you really want to get insight into it.

David: For example, there is Debreu. He did measure theory, really. He got the Nobel Prize in economics for doing it. There’s a whole crowd of them that I don’t know about, and they keep away from the real world.

Mur­ray: Ar­row’s work was on this vot­ing sys­tem but you know the ac­tu­al as­pects of vot­ing as they really take place are quite dif­fer­ent. I’m not de­cry­ing any field, it just seems to be a field where oc­ca­sion­ally there’s a big dif­fer­ence between the ideal­ized mod­els, which are great, but that seem to be taken ser­i­ously as com­pared to what ac­tu­ally takes place.

Richard: What do you think about the de­vel­op­ment of some of these mod­els in fin­an­cial time series? It seems to be dom­in­at­ing the field right now.

Murray: They are helpful as normative models if you want to set some mark, but in some of the well-known cases where you go outside of the assumptions of the model, the system blows up. For example, the difficulties with the hedge funds, right? It’s clear some of the basic assumptions simply weren’t satisfied.

Richard: Do you have any thoughts about things like GARCH mod­els at all, or stochast­ic volat­il­ity mod­els which are used for mod­el­ing fin­an­cial time series?

Mur­ray: I’m really not ex­per­i­enced but I think they are in­ter­est­ing. How suc­cess­ful are the GARCH mod­els?

Richard: That’s a good ques­tion. A lot of people be­lieve that they have severe lim­it­a­tions. On the oth­er hand, they cap­ture some as­pects of high­er-or­der mo­ment struc­tures in a cer­tain way be­cause the data are un­cor­rel­ated yet de­pend­ent.

Mur­ray: You’ve been deal­ing with some mod­els which aren’t non­lin­ear but cap­ture some as­pect?

Richard: Yes, the all pass mod­els cap­ture some of the same fea­tures but not as clev­erly as the GARCH mod­el. People in fin­ance seem to like GARCH mod­els; they seem to work and tap in­to some es­sen­tial fea­tures, surely not everything.

Murray: I’m sure you’re right. One of the aspects of economics is that there are large political factors. A firm is going to like it; particularly if you’re good at public relations, you can sell your ideas. You can say this is the way to do things. A firm may be willing to pay you a considerable amount of money. And as you say, it’s sometimes difficult to see whether what you’re selling actually represents what is taking place.

Richard: I think there’s been some cross-fer­til­iz­a­tion as well from eco­nom­ics back in­to main­stream time series. Now we in­spect re­sid­uals for not just be­ing un­cor­rel­ated but for ad­di­tion­al non­lin­ear­ity such as volat­il­ity as mani­fes­ted by cor­rel­a­tions in ab­so­lute val­ues and squares of the re­sid­uals. We don’t only look for these fea­tures, but also at­tempt to mod­el them as well.

Mur­ray: But you see there are as­pects of that already pos­sibly mo­tiv­ated by an­oth­er area, long be­fore eco­nom­ics.

Richard: This may be something I don’t know about.

Mur­ray: Be­cause there is the phe­nomen­on of something called in­ter­mit­tency in tur­bu­lence. And what is in­ter­mit­tency, but what you refer to from an­oth­er point of view…what was the term you used?

Richard: Volat­il­ity?

Murray: Volatility. Because if you’re looking at time series, sometimes it’s a matter of global structure versus local structure, and the intermittency corresponds sometimes to what happens locally. There’s this heuristic theory, which people try to formalize, that probably originated with Kolmogorov — the notion of what’s sometimes called the energy cascade in turbulence, which is typically looked at from the point of view of spectral analysis. The notion is that the initial energy input is at low frequency and then it cascades down as it’s transferred nonlinearly. Of course, turbulence is a three-dimensional phenomenon and economic fluctuations are usually analyzed one-dimensionally. And the basic dissipation of energy is supposed to be taking place at the high frequencies; this is all sort of heuristic. People have tried to formalize this in various ways but it has its difficulties. Some people look at random solutions of nonlinear equations of motion from a moment point of view. You get an endless sequence of linked moment equations, but you try to truncate them at third or fourth order. Isn’t this a little reminiscent of the idea behind the GARCH model in a certain sense?

Dav­id: Yes, one has moved to fourth or­der mo­ments.

Mur­ray: And these ideas have been go­ing around for dec­ades. One won­ders wheth­er Engle in his work on GARCH may have been par­tially mo­tiv­ated by ideas from phys­ics (tur­bu­lence).

David: Can I try an idea on you, because I think I understand what you’re saying? People would bring me time series data and I would think, yes, maybe ARMA; it looks stationary, and that was the sort of class of models I had. If someone was asking me and I didn’t have the time to do computing and so on, I would say why don’t you try the ARMA package? Then there would be serious people coming to me and they would show this thing we now call volatility, or intermittency, and so on. And now with all this GARCH stuff there’s a whole package of programs I can direct people to, and they’ve got residuals and they’re forecasting things. So now there’s a whole class of wiggly lines, developed in your package with Brockwell, that we are working with. On a related topic, I feel it must be frustrating to econometricians because they have all these clever ideas for these very difficult problems and often none of them seem to work with their data. So it is the people in other fields who are the ones that take advantage of their clever ideas. It’s good they exist, but so be it.

Mur­ray: One of the dif­fi­culties with eco­nom­ics too is the data. This ad­min­is­tra­tion of the young­er Bush is not in­ter­ested in pro­du­cing good data. They’re in­ter­ested in pro­du­cing data that agrees with their dog­mat­ic no­tions.

Richard: I’m not sure how hard it is to get the data.

Dav­id: I get some data.

Mur­ray: You can get data, but is the data de­cent?

Richard: I’m not sure that would be the is­sue for me, it’s just the in­ter­pret­a­tion and the con­clu­sions that are drawn from the data. I don’t know if they changed their data col­lect­ing meth­ods to ad­here to a change in policy.

Dav­id: They’ve changed defin­i­tions.

Mur­ray: Cost-of-liv­ing data is ob­vi­ously, at cer­tain points, highly polit­ic­ally ma­nip­u­lated. Maybe stuff like stock mar­ket data is bet­ter but we don’t know. We have open ques­tions about CEOs and the stock mar­ket.

David: Clive Granger has come up with a lot of neat ideas. He’s entered the statistics we teach and has had an impact.

Mur­ray: How suc­cess­ful do you think these mod­els like GARCH have been?

Richard: I think they do a fair job. The mod­els have prob­lems be­cause there’s a de­pend­ence is­sue and there’s a heavy tail is­sue, which are in­ex­tric­ably linked in GARCH mod­els.

Murray: The heavy-tailed aspect, you mentioned. I was just thinking about these things. I was harping on the effect of CEOs. You have the business of CEOs making their bargain in selling a company. Quite often it’s tied to tremendous benefits for these CEOs once the company is sold; the benefits are a nontrivial fraction of the value of the company. Now it seems to me when things like that happen they’ve got to have a long effect.

Richard: I think it’s a hard prob­lem try­ing to mod­el these types of volat­il­it­ies dir­ectly from the re­turns, be­cause there’s not a lot of de­pend­ence in it. We don’t have tools to try to find this kind of de­pend­ence.

Mur­ray: I think you al­most have to try to look at some as­pect of the sys­tem in terms of ex­plan­a­tion, as it ex­ists today. I think today we spend a lot of time deny­ing cer­tain as­pects of the sys­tem claim­ing it works in cer­tain ways while it doesn’t, and we don’t quite know why it doesn’t work. We know that cer­tain pe­cu­li­ar things take place but we don’t know the full ex­tent of the de­tailed mech­an­ism.

Richard: I am not sur­prised that this would be your ap­proach to this prob­lem. With your back­ground in en­gin­eer­ing and the phys­ic­al sci­ences, you really want to mod­el the sys­tem and un­der­stand what’s hap­pen­ing there. In oth­er cases, however, one might at­tempt to mod­el the data without re­gard to some phys­ic­al sys­tem or the in­ter­ac­tions driv­ing the mod­el spe­cific­a­tion.

Dav­id: Which are you, Richard?

Richard: I think I’m the lat­ter be­cause I’m just not smart enough to fig­ure out the phys­ic­al sys­tem as­pect.

Dav­id: The sci­ent­ist wants to un­der­stand things and so on. Maybe I’m at­tack­ing Tukey here, at least his ex­plor­at­ory data ana­lys­is. I’m with Mur­ray here. The way you de­scribed it, Mur­ray, that’s sort of my motto.

Richard: I’d like to be like Mur­ray too, but I’m afraid I am not clev­er enough. I think a lot of these cases in eco­nom­ics — it’s just not go­ing to work that way. That’s what I think is the beauty of stat­ist­ics. You can of­ten use a stochast­ic mod­el for cer­tain phe­nom­ena which can do a cred­ible job as a proxy for de­scrib­ing a phys­ic­al sys­tem.

Dav­id: It would be nice to know who made money do­ing this from the GARCH stuff let’s say, not by writ­ing a book or not by cheat­ing, but they really made money with the straight GARCH stuff.

Murray: I’m sure people have made money. I think the GARCH models have become very popular, along with the stochastic differential equations. I think they set certain levels that are not too bad if you don’t take them too seriously. But I think if you push it too far then the thing can blow up.

Richard: I think that’s the prob­lem with non­s­tat­ist­i­cians. They take the mod­els too ser­i­ously and make more of it than there really is — it’s only a mod­el.

Murray: Economics is an area where one can make lots of money. I can say, even from an academic point of view, I give kudos to the economists, for they get among the highest salaries in the academic area. We can argue how reasonable that is or not, but they’re effective at selling; in business, people are willing to pay them a good deal of money.

Dav­id: What got you do­ing spec­tral ana­lys­is?

Mur­ray: It was ba­sic­ally the con­tact with Gren­ander.

Dav­id: So Gren­ander was already do­ing spec­tral ana­lys­is?

Murray: I don’t know when. He obviously was interested in time series from the beginning, and his thesis, I think, was on what sort of discrimination problems there are for time series, particularly parametric forms…. Those are the things they credited him for later on, and sieves. Well, the idea of the method of sieves, if you really want to look back, is really in that paper [e3]. You know, the background is something I got to know through contact with Akiva Yaglom. The background on spectrum analysis is really quite amusing historically, because an initial heuristic idea was actually in an old paper by Albert Einstein, from about 1914, in a Swiss journal [e1].

#### Favorite papers

Richard: What is your fa­vor­ite pa­per that you’ve writ­ten?

Mur­ray: I’ve nev­er thought about that.

Richard: You don’t rate them, like a top five?

Mur­ray: I’ll tell you, the thing that as­ton­ishes me is that some of the pa­pers that you think wer­en’t par­tic­u­larly out­stand­ing drew some of the greatest in­terest and some of the pa­pers that seemed very in­ter­est­ing, didn’t. I can’t say I’m a good judge.

Richard: This is good to hear from you, ac­tu­ally.

Dav­id: Some of these pa­pers that people haven’t really ap­pre­ci­ated, well, 20 years from now all of a sud­den they may.

Mur­ray: That’s an op­tim­ist­ic point of view.

Richard: Maybe I can ask you about one with Keh-Shin Lii deal­ing with high­er or­der spec­tra and non-Gaus­si­an time series?

Murray: Well, I can tell you about one thing that, in view of my comments, was amusing. It was a paper eventually accepted by the Annals of Statistics [23]. For some non-Gaussian models, you cannot estimate the parameters consistently using methods devised under the Gaussian assumption; you have to adapt the procedure. I think we used higher-order spectral estimates to estimate some of these parameters. What was interesting, looking back, was the positive direction on the part of the Annals. The initial reaction from the referee seemed to be that this paper was all wrong and should be rejected. Apparently this person had never heard of non-Gaussian processes, or of the fact that methods devised under the Gaussian assumption may not estimate parameters consistently if the process is not Gaussian; you could think intuitively that the Gaussian procedure works in all cases. We had to write a very detailed reaction. We didn’t want to react in a hectic, passionate way; we tried to explain in great detail what the situation was, and eventually, through an intermediate exchange (one has to say maybe this is a tribute to the Annals of that day), we managed to sway the editorial board: you know, there’s something to this paper, and it’s not a situation where all classical techniques work. It’s a very common reaction, particularly with some well-established journals which are used to standardized procedures for certain favorite fields, that if you come up with a procedure which doesn’t sound usual, or reminiscent of the typical procedures, quite often the reaction is that there has to be something wrong and it can’t be very interesting, or whatever.
I think for that reason, many journals end up publishing some papers which I’m sure are technically very good but not very interesting, and miss out on some of the most interesting papers. I don’t know if any of you have had similar experiences.

Richard: Of­ten one does re­ceive a num­ber of pa­pers to re­view from people who are not ex­perts in the area and it’s not al­ways easy to make a quick de­term­in­a­tion about the pa­per’s qual­ity and sig­ni­fic­ance. It can be a dif­fi­cult chore to fil­ter the pa­pers without much sub­stance from those that are mak­ing sig­ni­fic­ant ad­vances.

Mur­ray: Right, right.

Richard: It’s not so easy some­times.

Murray: What happens in many of these cases — forget about any proofs or anything of that sort — is: does the person read the statement of results, and does he try to understand them? In many cases I have found that this is not done.

Richard: I think you are right.

Dav­id: I think some people have an at­ti­tude that if they are not un­der­stand­ing this by cas­u­al read­ing then the au­thor is not a good ex­pos­it­or.

Mur­ray: Prob­ably, or it’s a re­ac­tion — this sounds so dif­fer­ent and I know a reas­on­able amount about the field that it’s got to be wrong since it doesn’t sound fa­mil­i­ar.

Ady: How many years did it take to get that paper through?

Mur­ray: That took about two years.

Richard: Is this the one about de­con­vo­lu­tion and es­tim­a­tion?

Mur­ray: Yes.

Richard: That par­tic­u­lar pa­per has had a fairly large im­pact.

Mur­ray: Ac­tu­ally, I think part of it relates to an idea that Dav­id had at some time in his pa­per on the iden­ti­fic­a­tion of a non­lin­ear time series sys­tem [e7]. Part of the en­gin­eer­ing com­munity took up these ideas. There was an­oth­er pa­per that ap­peared in the An­nals later. The as­so­ci­ate ed­it­or didn’t think it was in­ter­est­ing, but a few of the oth­er ed­it­ors did. I don’t think the as­so­ci­ate ed­it­or un­der­stood what was go­ing on.

Richard: One thing that seems pre­val­ent in al­most all of your re­search pa­pers is the ques­tion, “What if?” You seem to ask this ques­tion all the time. What if these con­di­tions aren’t true or what hap­pens in this situ­ation? Non­stand­ard situ­ations of­ten seem to have gen­er­ated very in­ter­est­ing prob­lems.

Mur­ray: Well, I think that is only part of the reas­on, even though I do play that game oc­ca­sion­ally. I don’t think that I am well ori­ented to look­ing at a ques­tion which is su­per well-defined and want­ing to get the best con­di­tions pos­sible for it. Prob­ably even the best con­di­tions de­pend on how you phrase the af­fair. It seems to me it’s more in­ter­est­ing to look a bit more broadly and see what gen­er­ally goes on.

Dav­id: The way I would re­ph­rase what you were say­ing is that Mur­ray is good at look­ing for counter­examples. You find something and then you are look­ing for a counter­example. I don’t mean for the res­ult that you prove, but sup­pose you weak­en the as­sump­tion, you have your as­sump­tion, what is your counter­example?

Murray: It’s not necessarily looking for a counterexample, but there’s one case where I eventually looked for a counterexample believing I wouldn’t find it, which astonished me, and that’s a case that’s still open. There’s some work of Wiener’s that he exposited in a little book on nonlinear methods. I think he did some work by himself and some work with Kallianpur [e4]. One of the ideas that he had was that under certain conditions, strong enough conditions, you could encode a stationary stochastic process as a one-sided function of a sequence of independent random variables. That’s a more stringent version of a problem dealt with by people like Ornstein, Kolmogorov and Sinai, sometimes called the isomorphism problem, where you get conditions for representing the process as a two-sided function of a sequence of independent and identically distributed random variables. I guess Wiener had certain conditions that I thought looked reasonable — and maybe something like that is still appropriate — but the way he formulated them the conditions are not sufficient. That led to something like the assumption that the series has a trivial tail field. So I thought I might write something up along that line, but I thought first let me see what happens. I was able to construct a case where those conditions hold but you can’t do it as he thought. Today, I think a trivial backward tail field without an additional condition is not sufficient for doing something like that. It is still a very interesting question to find necessary and sufficient conditions for such a representation.

Richard: Is this the sta­tion­ary Markov chain pa­per?

Mur­ray: It is one of the pa­pers around that time, but I am of­fi­cially so far gone that I don’t re­mem­ber that far back.

Richard: Your memory is ac­tu­ally in­cred­ible to me. You can even re­call go­ing up to the board to do a prob­lem in one of your classes at CCNY!

Mur­ray: Some things make a strong im­pres­sion.

Richard: In the 1960 independence and dependence paper [15], you produce an example constructed by squaring a Gaussian process and recentering it by subtracting one.

Mur­ray: Yes, that pro­cess is not strong mix­ing since you do not have asymp­tot­ic nor­mal­ity of the par­tial sums.

Richard: Not strong mix­ing since you don’t get a cent­ral lim­it the­or­em.
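In symbols, the example being discussed can be sketched roughly as follows; the covariance decay rate and normalization below are the standard gloss on this construction, not quoted from the paper.

```latex
% Let (Z_t) be stationary Gaussian, E Z_t = 0, E Z_t^2 = 1, with
% covariance r(k) = E[Z_t Z_{t+k}].  Set
X_t = Z_t^2 - 1, \qquad \operatorname{Cov}(X_t, X_{t+k}) = 2\, r(k)^2 .
% If r(k) decays slowly, say r(k) \sim k^{-\alpha} with
% 0 < \alpha < 1/2, the covariances of (X_t) are not summable, and the
% normalized partial sums
n^{-(1-\alpha)} \sum_{t=1}^{n} X_t
% converge to a non-Gaussian limit (the process Taqqu later named the
% Rosenblatt process, at time 1).  This failure of asymptotic
% normality is how the example rules out strong mixing here.
```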

Mur­ray: There was a pa­per I wrote called “Sta­tion­ary Markov chains and in­de­pend­ent ran­dom vari­ables” [12]. And I think that’s when I showed that a ne­ces­sary and suf­fi­cient con­di­tion for a sta­tion­ary count­able state Markov chain to have such a one-sided rep­res­ent­a­tion is that it be mix­ing. In my pa­per “Sta­tion­ary pro­cesses as shifts of func­tions of in­de­pend­ent ran­dom vari­ables” [10], I gen­er­ate a Markov chain for which the Wien­er con­struc­tion doesn’t yield a one-sided rep­res­ent­a­tion, but if you reen­code the pro­cess slightly, in an ap­pro­pri­ate way, you would be able to. So, as I say, it’s still an in­ter­est­ing open ques­tion for Markov pro­cesses.

Dav­id: Are you still work­ing on that ques­tion?

Mur­ray: Every so of­ten I re­turn to it.

Richard: What about the pa­per where you in­tro­duce strong mix­ing and prove a cent­ral lim­it the­or­em?

Murray: The strong mixing condition was slightly incorrectly formulated. I corrected it in a subsequent joint paper with Blum [6], but the basic argument for the central limit theorem under strong mixing is given there.

Dav­id: Richard, there’s a pa­per a fel­low wrote about a new mix­ing con­di­tion and Mur­ray then showed the only pro­cess that sat­is­fied it was the se­quence of in­de­pend­ent identic­ally dis­trib­uted ran­dom vari­ables.

Richard: Yes, this sounds very fa­mil­i­ar.

Mur­ray: Richard Brad­ley then got an ex­ten­sion of my com­ments. I tried to sug­gest to this per­son that there was something strange about his con­di­tion. I just don’t think he re­acted to it and I didn’t know what else to do.

Dav­id: Well that’s what Richard did to start us all off in this sec­tion of the in­ter­view, “What if?”

Murray: It was already clear when you have the Russian school with a background of Kolmogorov and these younger people — and remember the Russians had quite a school concerned with turbulence, with Monin and Kolmogorov primarily, and people like Yaglom who had been interested from that time. So it was clear already that there were these applications in terms of stochastic methods. They might not have used spectra to a great degree, but already people had exposure to people like Bartlett going back to '46–47. From that point on, he — Bartlett — pursued that area. When I visited London, I visited Bartlett's department, which, actually, I found delightful because Bartlett was a moderately formal guy but a very interesting person to talk to.

Richard: I think it’s in­cred­ible that in 1956 you have these two pa­pers on the cent­ral lim­it the­or­em un­der strong mix­ing, and then this dens­ity es­tim­a­tion pa­per. Did you ever ima­gine the im­pact that these two pa­pers would have in the field?

Mur­ray: I guess I wrote this pa­per on the cent­ral lim­it the­or­em and strong mix­ing con­di­tion while I was vis­it­ing Columbia be­cause I’m cred­it­ing Columbia at that time, 1956. I’ve got to say that, ac­tu­ally, the people who really did ap­pre­ci­ate it were not people in the States ini­tially, they were the Rus­si­ans. The pa­per that ob­vi­ously took off was that of Kolmogorov and Roz­an­ov [e5]. It is the first pa­per that con­siders, in the Gaus­si­an case, what are suf­fi­cient con­di­tions for strong mix­ing. Later on it led to this work in Four­i­er ana­lys­is of Hel­son and Sara­son where they got the ne­ces­sary and suf­fi­cient con­di­tions for strong mix­ing in the case of the Gaus­si­an sta­tion­ary pro­cess. Dav­id, these are two col­leagues of yours. I vis­ited the So­viet Uni­on around 1963 and met a num­ber of Rus­si­an prob­ab­il­ists. I have pic­tures of Ya­glom, Sinai and Shiry­aev.

Richard: So they were keen on the cent­ral lim­it the­or­em un­der strong mix­ing pa­per?

Mur­ray: Well, the pa­per, I’m try­ing to re­mem­ber when the pa­per of Kolmogorov and Roz­an­ov ap­peared. My pa­per was about 1956 and I think their pa­per was around 1960. I think there was a whole group of Rus­si­ans, not ne­ces­sar­ily con­cerned with my ver­sion of the mix­ing con­di­tions, but oth­ers like Ibra­gimov were con­cerned with mix­ing con­di­tions sug­ges­ted by Kolmogorov. You see this in the book [e8] by Ibra­gimov and Roz­an­ov, Gaus­si­an Ran­dom Pro­cesses, a dis­cus­sion of Kolmogorov con­di­tions and strong mix­ing con­di­tions. So there was a good deal of activ­ity at that time in the Rus­si­an school. There seemed to be more ap­pre­ci­ation or more re­ac­tion in the Rus­si­an school than in the States to­wards my mix­ing pa­per.

Richard: Your book [7] with Gren­ander on time series had a huge im­pact on the growth and de­vel­op­ment of time series. It seemed to be way ahead of its time.

Mur­ray: Did it? I really don’t know how much im­pact it had.

Dav­id: I think it was enorm­ous. I had been in these en­gin­eers’ of­fices and they had the book — Larry Stark, for ex­ample. People like that.

Mur­ray: I see. I thought it was un­for­tu­nate that it took 3–4 years.

Richard: Even so, it just seems like the res­ults in there, in­clud­ing re­gres­sion with time series er­rors that Gren­ander and you de­veloped, are still quite timely.

Mur­ray: Oh sure, you have to re­mem­ber that part of that stuff on re­gres­sion ana­lys­is ini­tially star­ted in some of Gren­ander’s earli­er work. I think I car­ried on some of it to the mul­tivari­ate case and some pa­pers bey­ond that. But no, my thought was the book could have quite a bit of an im­pact. It did take sev­er­al ad­di­tion­al years to get the book in print.

That's why I feel the way I do about papers — and, by the way, that is a criticism I would make of present-day refereeing as I see it in journals. Particularly in some journals that claim to take themselves very seriously. It seems to me that, at present, the reviewing time, at least in our field, is incredibly long, and worse than that, when you get the paper back it's usually not really refereed in the old sense — in some cases the person doesn't even read the statement of the results. It seems to me if you receive a paper, you can decide rather quickly whether it's interesting or not. If the paper is uninteresting you can send it right back in a fairly short time.

Dav­id: No one gets harmed when you do that.

Mur­ray: That’s right, and if you think it may have some in­terest, then loc­al­ize the in­terest and make some pos­it­ive sug­ges­tions or cri­ti­cism, but get it out in a reas­on­able time and don’t make cri­ti­cisms on trivia that don’t amount to any­thing.

Dav­id: Mur­ray, may I agree with that but also dis­agree? I get more let­ters “Would you please ref­er­ee and send a re­port with­in 6 weeks” and they are ser­i­ous.

Mur­ray: I get them too, but what I do is I’ll say right off the bat I can­not ref­er­ee with­in 6 weeks and I could pos­sibly do it in 3–4 months. I think I can do that and I will try to do it, but it seems to me if I can’t do it I re­turn the pa­per to the ed­it­or im­me­di­ately.

Dav­id: I think a lot of young people get very dis­cour­aged at the be­gin­ning be­cause this hap­pens. A lot of people nev­er pub­lish their thes­is for ex­ample. Some people only pub­lish their thes­is.

#### Miscellaneous musings

Dav­id: Mur­ray, I have a ques­tion. Richard made some notes on the pa­pers, but maybe you don’t want to an­swer it. I was just curi­ous wheth­er the field of stat­ist­ics has an im­pact on the Na­tion­al Academy of Sci­ences. You were elec­ted a mem­ber and that was a very im­port­ant hon­or. I am in­ter­ested in the polit­ics of the Amer­ic­an scene and stat­ist­ics.

Mur­ray: Well, I might be able to give some per­spect­ive, but there are oth­ers who per­haps could provide more in­sight.

Dav­id: I was won­der­ing what your ex­per­i­ence has been. Was it mainly an hon­or to be elec­ted?

Mur­ray: It cer­tainly is an hon­or. It gives you the feel­ing of in­ter­ac­tion with dif­fer­ent fields. I think Tukey is one of the earli­est people elec­ted.

Richard: People in probability and statistics, you mean?

Murray: Right. Probably the earliest person I know of who clearly claimed to be a statistician would be Tukey, and I would guess that he may have played an important role. You see every so often what the Academy does. There are certain significant fields which are unrepresented or minimally represented, so they may increase the number of slots and they may have special nominating committees. I think it's clear there must have been an impetus at certain times for statistics and probability. Doob and Feller were elected in 1957 and 1960; Tukey and Neyman were elected in 1961 and 1963.

I’m a sit­ting dod­der­ing char­ac­ter, but it seems to me that things like time series ana­lys­is and the ana­lys­is of such data should be taught in stat­ist­ics de­part­ments very broadly. Today they are not, or am I wrong?

Dav­id: No, no you’re right. Manny Par­zen has said there are all these very bright stat­ist­i­cians who say time series is hard and seem sort of proud not know­ing any­thing about time series. I have run in­to this at­ti­tude my­self.

Mur­ray: That’s a pity, and I think one wants in­ter­ac­tion between time series ana­lys­is people, the en­gin­eer­ing types and also the types in bio­logy. For ex­ample, John Rice and Don Fredkin dealt with cer­tain ba­sic prob­ab­il­ist­ic mod­els us­ing time series mod­els in help­ing them ana­lyze the data in a bio­s­tat­ist­ic­al con­text. The con­text with bioin­form­at­ics is great and this leads to in­creas­ing types of mod­els. I think it’s un­for­tu­nate that in many of the de­part­ments they re­strict them­selves to only the clas­sic­al mod­els.

Dav­id: I think the bright stu­dents are just go­ing to buy the book. They’re not go­ing to waste the whole semester sit­ting in class when something’s in a book.

Ady: They can get the book and go through it on their own.

Mur­ray: I think they lose lots of stu­dents who might be very good. Richard, I don’t think you de­cided be­fore­hand what you wanted to go in­to in terms of ease.

Richard: I’m prob­ably not a good ex­ample, though, for a typ­ic­al stu­dent.

Mur­ray: No, I’m not try­ing to put you forth as a typ­ic­al stu­dent.

Richard: I could have eas­ily done non­com­mut­at­ive ring the­ory or something like that!

Mur­ray: Dav­id, what’s your own re­ac­tion to things like data min­ing?

Dav­id: I am go­ing to join them.

Mur­ray: Oh.

Dav­id: The data miners are be­ing sneered at by some of the stand­ard stat­ist­i­cians in some cases.

Mur­ray: Are they? I hadn’t heard that.

Dav­id: I think we stat­ist­i­cians want to join the guys do­ing this stuff in com­puter sci­ence de­part­ments or wherever, oth­er­wise our field is go­ing to lose out.

Murray: Well, there's going to be all sorts of disagreement. Witness the reaction to people saying, you know, that you're wasting time if you do a great deal of computation. Then the notion of exclusiveness arises and you are going to have the same difficulties. It's only if you can have some sort of reasonable interchange that both fields can benefit.

David: I think there are a lot of problems out there that people want solutions to. So I don't think that's what's going on with the data miners. The statisticians may choose not to join them, but I don't think that the data miners are going to resist people coming in and helping them with these big data sets.

Murray: It'll be interesting to find out. It's interesting to see what's happened in bioinformatics. I'm sure some part of it may be very bad, but some part of it sounds really quite exciting and interesting. And from a broader point of view maybe more imaginative and involving more interaction with the subject matter. For example, for some diseases they have little idea of what to do. But in cases with powerful statistical techniques they can filter out, from some of these microarrays, a few factors that seem to be relevant. Anyway, what I wanted to say at the beginning is that I actually have had, indirectly, some association with applications of probability theory and statistics.

Richard: I don’t think oth­er fields res­ist stat­ist­i­cians com­ing in to join them in their re­search. For the most part, they wel­come the as­sist­ance — it is of­ten viewed as an add-on to their sci­ence. In many cases, cer­tainly not all, it can also be an add-on for stat­ist­ics as well.

Murray: Well, sometimes it can be an add-on to statistics, because I think what happens is, and you see a little of it in the bioinformatics field, at the beginning the initial crude idea is how you process the data. Some of the initial ideas may have been suggested by statisticians, but some may have been dug up by these biochemists themselves. It might be in terms of some typically crude scanning procedure, asking: does the thing look utterly random, or can we associate it with something that we recognized before? I think one of the great triumphs of the geneticists has been that they analyze the genetics of these more elementary organisms like fruit flies. Through time, they get to understand a reasonable amount of what certain genes control. One of their main mechanisms has been what one might call a version of pattern recognition. If there's something in the human genome that looks similar to something in the fruit fly genome, might it control something similar or related to it? But the trouble is, they shouldn't press it too much. They don't worry about having the exact sequence that you see in the fruit fly genome reproduced in part of the human genome; you don't want to take it too literally. There have been all sorts of transpositions in the genome — God knows what — so it is something with sort of a rough association.

Dav­id: Mur­ray, what are you work­ing on now?

Mur­ray: A stu­dent asked me a ques­tion about Markov pro­cesses and some con­di­tions I had writ­ten on a long time ago [12]. I could an­swer some of his ques­tions, but not all of them. I tried to an­swer as well as I could but if there’s this lack of clar­ity in the lit­er­at­ure, I should write a short note on the situ­ation. A note [25] ap­peared re­cently in Stat­ist­ics and Prob­ab­il­ity Let­ters. There are still some fur­ther ques­tions on pro­cesses with al­most peri­od­ic co­v­ari­ance func­tions. These are pro­cesses that are not sta­tion­ary but you can still es­tim­ate struc­ture from one se­quence us­ing Four­i­er meth­ods. An open ques­tion is what can you do and what not.

Richard: What about in­ter­ac­tions with some oth­er well-known stat­ist­i­cians or time series people such as Han­nan. Did you vis­it him?

Mur­ray: Yes, I did vis­it him in Aus­tralia and I did have in­ter­ac­tion with him. We nev­er got to write any pa­pers to­geth­er but he was a per­son of great in­sight. I nev­er found him an easy guy to read, though.

David: You know, someone described to me how Hannan worked. He'd get an idea for a theorem. Then he'd start to try to prove it from the beginning until he got stuck. Then he'd start working on it from the end, working backwards until he got stuck. Then he'd start in the middle and work out in both directions from there. And when all these things connected together, he had his proof.

Richard: It was well de­scribed — you did a good job.

David: Once I heard that, it made things clearer when looking at his papers.

Mur­ray: We vis­ited Aus­tralia twice. I found it very en­joy­able and stim­u­lat­ing.

Richard: Did you have much in­ter­ac­tion with Whittle?

Mur­ray: Whittle is a very tal­en­ted per­son. I think he has this per­son­al prob­ab­il­ist­ic ori­ent­a­tion. He’s not a per­son who is noted for rig­or but he has de­veloped some ex­tremely power­ful ideas via a re­mark­able in­tu­ition. I just find some of it at times in­cred­ible. Han­nan was very tal­en­ted. There are a few of his art­icles that are clearly writ­ten that I could read read­ily, but most of them I found dif­fi­cult. Maybe it’s due to what you just re­marked about.

Whittle, I think had some of these pro­found in­sights on how to get de­cent es­tim­ates for time series. They are based on what you might think of as very simple minded, but I think they are very deep. I think they are very power­ful be­cause they are so simple.

Richard: His idea, the Whittle like­li­hood, which was de­veloped in the ’50s, made this kind of renais­sance in the ’80s and ’90s, prob­ably be­cause of long memory mod­els. For like­li­hood cal­cu­la­tions, ex­act like­li­hoods are very dif­fi­cult to com­pute, but the Whittle like­li­hood of­ten gives good res­ults and is much easi­er to com­pute, even in com­plic­ated mod­els.
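For readers who haven't met it, the Whittle likelihood Richard mentions can be sketched in its standard textbook form; this formula is general background, not taken from the interview.

```latex
% Periodogram of X_1, \dots, X_n at the Fourier frequencies
% \lambda_j = 2\pi j / n:
I_n(\lambda_j) = \frac{1}{2\pi n} \Big| \sum_{t=1}^{n} X_t e^{-i t \lambda_j} \Big|^2 .
% The Whittle (negative log-)likelihood for a model with spectral
% density f_\theta approximates the exact Gaussian likelihood by
\ell_W(\theta) = \sum_{j} \left( \log f_\theta(\lambda_j)
    + \frac{I_n(\lambda_j)}{f_\theta(\lambda_j)} \right),
% which requires only the periodogram and f_\theta, avoiding the
% costly determinant and inversion of the n-by-n covariance matrix
% needed for the exact likelihood -- one reason it is attractive for
% complicated models such as long memory processes.
```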

Mur­ray: Even a good deal of the work of people like Walk­er and Han­nan was really an at­tempt to try to rig­or­ize some of Whittle’s in­sights, don’t you think?

Richard: How about mov­ing to­wards a more con­tro­ver­sial sub­ject? The pub­lic­a­tion of Box and Jen­kins’ book gen­er­ated a great deal of in­terest in AR­IMA mod­el­ing. They de­veloped a full-ser­vice paradigm, of­ten called the Box–Jen­kins ap­proach, for car­ry­ing out mod­el iden­ti­fic­a­tion, fit­ting, pre­dic­tion, mod­el check­ing, etc. of AR­IMA mod­els. This book seemed to have made quite an im­pact, es­pe­cially in busi­ness and the so­cial sci­ences. What are your feel­ings about this?

Mur­ray: I think it did be­cause I think they gen­er­ated ef­fect­ive means for pro­grams which people could use. It was a strange feel­ing, I don’t know if it had sub­stance, but I had the feel­ing there was some kind of com­pet­it­ive as­pect between Tukey’s ori­ent­a­tion and the Box–Jen­kins’ ap­proach.

Dav­id: I think the two groups were quite com­pet­it­ive from stor­ies I heard. Now, Jen­kins was an ex­pert in spec­tral ana­lys­is, and spent time at Prin­ceton, so I guess there wasn’t com­pet­i­tion with him. For him, Box–Jen­kins provided an­oth­er way to ap­proach the non­station­ary time series case.

Mur­ray: Sure, sure.

Richard: The book was mostly time do­main.

Murray: It's mostly time domain, that is, time domain versus spectral. But also maybe part of the reaction was in the economics community. There was a group called the Cowles Commission, which existed then at the University of Chicago, and there was a certain amount of real effort by these people to deal with various simple schemes, typically first-order autoregressive or first-order moving average. They produced a goodly number of papers on some specific questions there.

However, somehow that work didn't give one the feeling of a general approach. It was only once people started looking at the general order autoregressive or the general moving average and broader techniques, which I think people like Box and Jenkins did, that there was a flowering effect.

David: I think they had a particular audience in mind. They could have been more theoretical, but not for the audience they had in mind.

Richard: They also seemed to sys­tem­at­ize this whole mod­el sys­tem, fit­ting and that sort of thing, to make it ac­cess­ible to prac­ti­tion­ers. It also seems that if you look at the his­tory of time series, and I’m not a good judge of this by any means, but spec­tral do­main meth­ods dom­in­ated for a long peri­od of time and then in the ’70s things seemed to shift more to­ward time do­main. Would you agree with that?

Murray: I'm not sure about that. I think certainly the time domain theory became much more common. People think more broadly about implementing it. People may get frightened by spectral theory. They claim it's heavy in some sense, while autoregressive–moving average models may seem simpler. You could immediately write down a set of equations and try to fit the data. Keh-Shin and I published a paper on processes with almost periodic covariance functions, with mild aspects of nonstationarity, where you can still use modified spectral methods. We also wrote an earlier paper that was published in 2002 on processes with spectra on lines, not necessarily parallel to the diagonal as is the case for processes with almost periodic covariance functions. They also suggest implicitly that if you knew the curve or the locus of the spectra, you could use those spectral methods more generally. In the spectral community, one of the ways of dealing with nonstationarity was the concept of local stationarity. You block off the data in sections and hope you can get a decent spectral estimate in each section and that the spectral estimates change slowly from one section to neighboring sections. And there are people who have tried to formalize the concept. Priestley was one of the people pushing this idea. Dahlhaus also has been pushing an idea like this. Spectral methods are still used often in nonstationary geophysical investigations. That used to be one of the ways of dealing with earthquake detection and trying to determine the location of oil deposits using reflectivity properties — all with spectral methods. The geophysicists use it day in and day out and it's turned out to be useful.
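The blocking idea Murray describes, sectioning the data and estimating a spectrum in each section, can be sketched in a few lines. This is an illustrative sketch only; the function names, block length, and toy variance-change example are mine, not from the interview.

```python
import cmath
import math
import random

def periodogram(x):
    """Raw periodogram I(lambda_k) = |DFT(x)_k|^2 / (2*pi*n)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        out.append(abs(s) ** 2 / (2 * math.pi * n))
    return out

def blocked_spectra(x, block_len):
    """Estimate a spectrum on each consecutive block of the series.

    Under local stationarity, neighboring rows should change slowly;
    abrupt changes between rows suggest the assumption fails.
    """
    blocks = [x[i:i + block_len]
              for i in range(0, len(x) - block_len + 1, block_len)]
    return [periodogram(b) for b in blocks]

# Toy illustration: white noise whose standard deviation doubles halfway.
random.seed(0)
x = ([random.gauss(0, 1) for _ in range(256)]
     + [random.gauss(0, 2) for _ in range(256)])
spectra = blocked_spectra(x, 128)               # 4 blocks of length 128
avg_power = [sum(s) / len(s) for s in spectra]  # rises in the later blocks
```

A smoothed spectral estimate per block (rather than the raw periodogram) would be used in practice; the raw version keeps the sketch short.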

Dav­id: I use the es­tim­ated spec­trum a lot for re­sid­uals. I have a mod­el and now I want to look at something and have it sug­gest which way the mod­el is in­ad­equate. Oth­er meth­ods do not do so well in my ex­per­i­ence.

Murray: So you can use spectral methods both for estimation and for data processing generally, and, well, I presume if you have sufficient data where things are changing mildly. One of the people dealing with reflectivity problems is Papanicolaou. One of the relevant questions is, if it's locally stationary, well, what are the steps you have to take to get a reasonable estimate? That's a nontrivial question and I think there's a paper in the Annals he's written with Mallat that made efforts in this direction and had some interesting comments. Well, in the case of this paper of ours that Keh-Shin Lii and I have written, one of the initial remarks of the subeditor was that for anything that isn't stationary, that is, nonstationary, you can always use the notion of local stationarity, and therefore what's the point of looking at these processes with almost periodic covariance functions. Well, you actually have to use other spectral techniques for these things since they are not locally stationary. You may have spectra on a number of distinct lines and you can't estimate these spectra with locally stationary methods. So, if you had started out with an initial assumption of universal applicability of local stationarity, and if you read that into the paper, you could dismiss the paper. The paper has since appeared in the Annals of Statistics [24].

David: When you want to try to generalize these things to multivariate cases and so on, I think spectral analysis generalizes quite directly, but not the time domain, the Box–Jenkins approach in particular. Murray, you've done a fair amount on group theory, and it's scattered through your publications. That's another case where the Fourier approach extends quite directly and is useful, but the analog of the time domain approach hasn't been found yet. Were you doing some things in time series cases having thought of the abstract case and then specializing it to a time series case?

Mur­ray: There are the cent­ral lim­it the­or­ems, but also lim­it the­or­ems in a group- or a semig­roup-val­ued case [17]. Let me see in here. [Looks in a col­lec­tion of his re­prints.] That’s one of the hil­ari­ous things. The trouble is maybe one’s been around too long so you find out when you’re look­ing back you re­mem­ber the pa­per but you don’t quite re­mem­ber the de­tails so you have to start look­ing again.

Dav­id: Didn’t you have a stu­dent look at Toep­litz forms as a group?

Murray: There was a very bright student that I had at Brown. You may have very bright students, but they don't do very much additional work. And then some people have written theses nowhere near as interesting, but they go ahead and do interesting work — more than you might expect.

Richard: I was wondering if we can return to a topic that you mentioned a few minutes ago: this attribution issue, when people work on stuff in engineering and don't know the full history.

Murray: The question is also what sort of credit you give to people for their work. What would one say in the density estimation case? Certainly one should refer to the paper of Fix and Hodges. Am I going to say I have exclusivity? That's nonsense. All I can say is that I made some remarks. Some other people claim there is priority by the Japanese group of Akaike, who I'm sure wrote about it at about the same time. In the history of spectral analysis, Einstein had the basic idea, at least heuristically. For theorems and conditions, one waited another 30 years or so. So for some results you can say there is a canonical result, but can you say there's a canonical result on density estimation or a canonical result on spectral estimation? I don't think so. Can you?

Dav­id: No, when you think of dens­ity es­tim­a­tion you think of his­to­gram or es­tim­at­ing the bin width for a his­to­gram. They were do­ing that in the 16th cen­tury or so.

Mur­ray: Go ahead and smooth it so. Great in­nov­a­tion. Whatever happened, to me it was clear. Look, you can play this game with smooth­ing, with ker­nel es­tim­a­tion — you can use any es­tim­a­tion tech­nique. What’s sac­red about ker­nel es­tim­ates? That’s be­come a big busi­ness, hasn’t it, with wave­lets and this and that, so forth and so on? What did hap­pen, I have to ad­mit, I was a bit sens­it­ive. You’re work­ing in a cer­tain area and there are oth­er people work­ing in that same area and if you’re both do­ing reas­on­able work, you refer to each oth­er. But it’s clear in cer­tain areas that wasn’t be­ing done. There were some people or some groups who re­ferred to their in-group and that was it. The rest of the world didn’t ex­ist.
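The kernel estimate under discussion, smoothing with a kernel rather than binning into a histogram, is simple to write down. This is a minimal sketch; the Gaussian kernel, the bandwidth value, and the toy check are illustrative choices of mine, not prescribed by the conversation.

```python
import math
import random

def kernel_density(sample, x, h):
    """Kernel density estimate at the point x:
    f_hat(x) = (1 / (n * h)) * sum_i K((x - X_i) / h),
    here with the Gaussian kernel K."""
    n = len(sample)
    def k(u):
        return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(k((x - xi) / h) for xi in sample) / (n * h)

# Toy check: estimate the standard normal density at 0 from samples;
# the true value is 1 / sqrt(2 * pi), about 0.3989.
random.seed(1)
sample = [random.gauss(0, 1) for _ in range(5000)]
fhat = kernel_density(sample, 0.0, h=0.3)
```

Swapping the Gaussian kernel for any other smoothing kernel, or for a wavelet-based scheme, changes only the function `k`, which is precisely Murray's point that nothing is sacred about the kernel choice.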

Dav­id: You know how there are some pa­pers people asked to ref­er­ee and they just wait 2 or 3 years and write their own pa­per. Lu­cien Le­Cam used to talk about this.

Mur­ray: That cer­tainly has happened in math­em­at­ics a long ways back, you know.

Dav­id: Laplace, no not Laplace, but Le­gendre and New­ton, I think. New­ton was say­ing that he did it earli­er but didn’t pub­lish it. Le­gendre said that meant you can claim to have done any­thing.

Mur­ray: Also there are cer­tain ideas which are in the air for a long time. No one pushes them. There’s a time when they really seem in­ter­est­ing. Con­sider the whole busi­ness with the fast Four­i­er trans­form. It at least goes back to Gauss in terms of the ba­sic in­tu­it­ive idea and even the con­crete idea. He thought it was a mildly in­ter­est­ing re­mark. I don’t think he ever pub­lished any­thing on it did he? I don’t think so.

Dav­id: In his note­book.

Mur­ray: I think it’s in his note­books. And there are people who re­gen­er­ate vari­ous as­pects of it. It waited for real in­terest in com­pu­ta­tion be­fore people began to talk about it ex­tens­ively.

David: Some people are like this about data. In some fields the data belong to everybody, but in other fields people possess it and release it very slowly. Seismologists, when there's an earthquake involved, immediately release the data. Also, this biophysicist I work with once said that when they are trying to get a molecular structure from different views of a certain particle, the data are not owned by anyone, so they share it right away, and that's so wonderful.

Mur­ray: But that isn’t true in oth­er areas. For ex­ample, when Keh-Shin and I tried to get some re­flectiv­ity data from oil com­pan­ies, it wasn’t avail­able.

Richard: Turn­ing to an­oth­er top­ic, there are some ob­jects that carry the Rosen­blatt name. I’m not sure you were asked about it and maybe you are un­aware of the terms.

Mur­ray: What’s that?

Richard: Well there’s the Rosen­blatt trans­form­a­tion.

Mur­ray: The Rosen­blatt trans­form­a­tion? I don’t know. There’s one thing that I found very amus­ing. I think it was Granger who asked me about one of the first pa­pers I had ever writ­ten. It’s really a re­mark on an idea of Paul Lévy’s — a trans­form­a­tion on a mul­tivari­ate dis­tri­bu­tion that uni­form­ized it.

Dav­id: Yes, that’s it.

Mur­ray: I don’t know. Have they giv­en my name to that?

Richard: I have seen this a lot lately. In fact, I re­cently came across this term in a PhD dis­ser­ta­tion by an ap­plied math­em­at­ics stu­dent. I looked up the term “Rosen­blatt Trans­form­a­tion” on Google and was sur­prised to see so many hits. Sur­pris­ingly, most of the hits are from pa­pers out­side of prob­ab­il­ity and stat­ist­ics.

Dav­id: When did Lévy pro­pose it?

Mur­ray: I really don’t know. If you’re in­ter­ested I can look it up, but can’t do it just now.

David: I ask because when I was working with fiducial probability, I found a paper by Irving Segal [e2] in which he had this same transformation. He was trying to find things that were pivotal quantities. It reminded me of what Wiener was doing. Wiener was trying to reduce integration in higher dimensional spaces to integration along the line.

Mur­ray: It might be in­ter­est­ing to try to track down all the as­so­ci­ations. But my memory is that I think I must have read some sec­tion in Paul Lévy’s fam­ous book and there was a re­mark there. I said, oh, why can’t you do it two di­men­sion­ally or mul­ti­di­men­sion­ally? So that was the gist of my re­mark and I’m sure it may have ap­peared earli­er, but the ba­sic idea of Paul Lévy goes back at least to the mid ’40s, maybe earli­er.
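The transformation in question, mapping a multivariate distribution to the uniform distribution on the cube via successive conditional distribution functions, can be sketched for the bivariate Gaussian case. The correlated example, parameter names, and bivariate-normal specialization here are mine for illustration; the general idea is the one discussed above.

```python
import math
import random

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rosenblatt_bivariate_normal(x1, x2, rho):
    """Transformation for a standard bivariate normal with correlation
    rho: u1 = F(x1), u2 = F(x2 | X1 = x1).  The output pair is
    uniformly distributed on the unit square."""
    u1 = phi(x1)
    # Conditionally, X2 | X1 = x1 is N(rho * x1, 1 - rho**2).
    u2 = phi((x2 - rho * x1) / math.sqrt(1.0 - rho ** 2))
    return u1, u2

# Toy check: feed in correlated Gaussian pairs; the transformed pairs
# should look like independent uniforms on [0, 1].
random.seed(2)
rho = 0.8
pairs = []
for _ in range(2000):
    z1 = random.gauss(0, 1)
    z2 = random.gauss(0, 1)
    x1 = z1
    x2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * z2
    pairs.append(rosenblatt_bivariate_normal(x1, x2, rho))
mean_u1 = sum(u for u, _ in pairs) / len(pairs)
mean_u2 = sum(v for _, v in pairs) / len(pairs)
```

In higher dimensions the same recipe chains conditional distribution functions one coordinate at a time, which is what makes the device useful for simulation and goodness-of-fit outside probability and statistics.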

Dav­id: The Cooley–Tukey pa­per — you know you’re talk­ing about the work of Gauss and oth­ers. The cred­it Cooley and Tukey de­serve is re­cog­niz­ing the ap­plic­ab­il­ity and pub­li­ciz­ing it.

Murray: Oh, absolutely. I really think, given the interest I have in history, that scientific history is interesting, but in one way it is off kilter. I think that all people who are associated with some sort of original idea, or with its applicability, ought to be given the credit; it would be interesting to trace it through time. But I think that is one thing that is wrong with mathematical crediting. Mathematical crediting looks at such and such a result, and often they put a latter-day researcher's name on it, and most of the time it is absolutely wrong.

Richard: This at­tri­bu­tion might have come out­side of the math­em­at­ics com­munity. It’s in­ter­est­ing that Granger had men­tioned this to you.

Dav­id: I’m not sur­prised that Granger men­tioned it be­cause he would think it is an in­ter­est­ing way to grab onto a mul­tivari­ate prob­lem.

Mur­ray: I only found out after the fact that it seemed to be of in­terest to eco­nom­ists and Granger. He said, well, we found an in­ter­est­ing art­icle of yours. Then when he told me, I thought, gee, what pa­per could he mean and then I began smil­ing — oh, it’s that pa­per [2]. Here he was re­mark­ing on an idea of Paul Lévy’s.

Richard: So you hadn’t heard this one be­fore?

Mur­ray: What, that my name was at­tached to it? No.

Richard: So you can look this up on the web — you’ll see lots of ref­er­ences to it. There is an­oth­er term that you prob­ably already know about, the Rosen­blatt pro­cess.

Mur­ray: That was, well what happened — that’s a curi­ous story. I don’t know what Taqqu’s memory of it would be, but I met Taqqu at some meet­ing. I guess he must have been a stu­dent of Man­del­brot and I don’t know if this was his thes­is. He greeted me and he es­sen­tially said there’s something wrong in a pa­per of mine. But then I thought about it and he ex­plained to me what he thought was wrong with it. I said no, it’s not wrong and I think he then looked in­to it in more de­tail. You know, that was this busi­ness of tak­ing a square of some pro­cess, a Gaus­si­an pro­cess, with a par­tic­u­lar sort of spec­trum, which I con­sidered later as an ex­ample of long-range de­pend­ence. His claim was that some com­pu­ta­tion was wrong. When I thought about it, it was clear it was not wrong. Luck­ily I was able to per­suade him be­cause when he looked through it him­self, then he really got him­self in­volved and ob­tained some very nice res­ults. He built these res­ults on func­tions of Gaus­si­an pro­cesses. He, on the one hand, and Dobrush­in and Ma­jor on the oth­er hand, did gen­er­ate this very nice set of res­ults on lim­it the­or­ems for such pro­cesses. That’s why Taqqu called the re­lated pro­cess the Rosen­blatt pro­cess.
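For readers outside the area, the example Murray refers to can be sketched as follows; this is our paraphrase of the standard statement, as later developed by Taqqu and by Dobrushin and Major, not part of the conversation.

```latex
Let $\{X_t\}$ be a stationary Gaussian sequence with
$\mathbb{E}X_t = 0$, $\mathbb{E}X_t^2 = 1$, and slowly decaying covariance
$r(k) \sim k^{-D}$ as $k \to \infty$, where $0 < D < \tfrac12$. Then the
normalized partial sums of the squares,
\[
  \frac{1}{N^{\,1-D}} \sum_{t=1}^{N} \bigl( X_t^2 - 1 \bigr),
\]
converge in distribution to a non-Gaussian limit. The associated
self-similar limit process is what Taqqu named the Rosenblatt process.
```

The unusual feature, which drove the refereeing dispute Murray recounts, is that under such long-range dependence the normalization is $$N^{1-D}$$ rather than $$\sqrt{N}$$, and the limit is not Gaussian.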

Richard: I think he ac­tu­ally used it in the title of one of his pa­pers.

Mur­ray: I think he may have be­cause I poin­ted out that it wasn’t a mis­take. It was worth­while look­ing in­to.

Richard: He said that paper really inspired him. I think the independence–dependence paper from the '60s is a good example where a referee could say this example doesn't really contribute and is not worthwhile to pursue, yet it inspired him and lots of other people to look at this stuff.

Murray: The independence–dependence paper came out as one of the Berkeley Symposium papers. You know, they ask you to write a paper.

Richard: So, you nev­er know, this in­spired a whole field of people work­ing on this.

Mur­ray: Ac­tu­ally, that was one of the more in­ter­est­ing pa­pers I did write. It brought up cer­tain is­sues and a lot of open ques­tions.

Richard: So, are you learn­ing something Ady?

Ady: I wrote down the Rosen­blatt trans­form­a­tion and the Rosen­blatt pro­cess.

Dav­id: Do you ever see some neat res­ult and think, I could have done that?

Mur­ray: Well, in fact, I know there are ideas I worked on and got res­ults on and in­ten­ded writ­ing up and I didn’t. From that point of view, I was scooped by someone else be­cause….

David: Does that bother you? I guess that's what I was trying to ask.

Mur­ray: Well, it bothered me a little bit. But it said that I did good work and….

Dav­id: You seem at peace with life, Mur­ray. Some people it may both­er.

Murray: No, I think the thing that I may have referred to would bother me: if I had done work in a certain area that should be credited, and someone else comes along and claims they had done all the work, and it's simply not true.

David: That's not where I was going. I just know you must have had countless ideas, used up countless pieces of paper and so on doing things.

Dav­id and Richard: Ady and Mur­ray, thank you. This has been a treat.

#### Acknowledgment

We are ex­tremely grate­ful to Pa­tri­cia Osumi-Dav­is for her pa­tient and splen­did work in tran­scrib­ing the of­ten in­de­cipher­able au­dio tapes of this con­ver­sa­tion and re­lated works.

### Works

[1] M. Rosen­blatt: “On a class of Markov pro­cesses,” Trans. Am. Math. Soc. 71 : 1 (1951), pp. 120–​135. MR 43406 Zbl 0045.​07703 article

[2] M. Rosen­blatt: “Re­marks on a mul­tivari­ate trans­form­a­tion,” Ann. Math. Stat. 23 : 3 (1952), pp. 470–​472. MR 49525 Zbl 0047.​13104 article

[3] M. Rosen­blatt: “An in­vent­ory prob­lem,” Eco­no­met­rica 22 : 2 (April 1954), pp. 244–​247. MR 61355 Zbl 0058.​36401 article

[4] M. Rosen­blatt: “A cent­ral lim­it the­or­em and a strong mix­ing con­di­tion,” Proc. Natl. Acad. Sci. U.S.A. 42 : 1 (January 1956), pp. 43–​47. MR 74711 Zbl 0070.​13804 article

[5] M. Rosen­blatt: “Re­marks on some non­para­met­ric es­tim­ates of a dens­ity func­tion,” Ann. Math. Stat. 27 : 3 (1956), pp. 832–​837. MR 79873 Zbl 0073.​14602 article

[6] J. R. Blum and M. Rosen­blatt: “A class of sta­tion­ary pro­cesses and a cent­ral lim­it the­or­em,” Proc. Natl. Acad. Sci. U.S.A. 42 : 7 (July 1956), pp. 412–​413. A longer ver­sion of this was pub­lished in Duke Math. J. 24:1 (1957). MR 81023 Zbl 0070.​36403 article

[7] U. Gren­ander and M. Rosen­blatt: Stat­ist­ic­al ana­lys­is of sta­tion­ary time series. Wiley Pub­lic­a­tions in Math­em­at­ic­al Stat­ist­ics. Alm­qv­ist & Wiksell (Stock­holm), 1957. A 2nd, cor­rec­ted edi­tion was pub­lished in 1984, then re­pub­lished in 2008. MR 84975 Zbl 0080.​12904 book

[8] C. J. Burke and M. Rosen­blatt: “A Markovi­an func­tion of a Markov chain,” Ann. Math. Stat. 29 : 4 (1958), pp. 1112–​1122. MR 101557 Zbl 0100.​34402 article

[9] M. Rosen­blatt: “Func­tions of a Markov pro­cess that are Markovi­an,” J. Math. Mech. 8 : 4 (1959), pp. 585–​596. MR 103539 Zbl 0100.​34403 article

[10] M. Rosen­blatt: “Sta­tion­ary pro­cesses as shifts of func­tions of in­de­pend­ent ran­dom vari­ables,” J. Math. Mech. 8 : 5 (1959), pp. 665–​681. MR 114249 Zbl 0092.​33601 article

[11] C. Burke and M. Rosen­blatt: “Con­sol­id­a­tion of prob­ab­il­ity matrices,” Bull. Inst. In­ter­nat. Stat­ist. 36 : 3 (1959), pp. 7–​8. MR 120680 Zbl 0111.​15005 article

[12] M. Rosen­blatt: “Sta­tion­ary Markov chains and in­de­pend­ent ran­dom vari­ables,” J. Math. Mech. 9 : 6 (1960), pp. 945–​949. An ad­dendum to this art­icle was pub­lished in J. Math. Mech. 11:2 (1962). MR 166839 Zbl 0096.​34004 article

[13] M. Rosen­blatt: “Asymp­tot­ic dis­tri­bu­tion of ei­gen­val­ues of block Toep­litz matrices,” Bull. Am. Math. Soc. 66 : 4 (1960), pp. 320–​321. MR 124086 Zbl 0129.​31205 article

[14] M. Rosen­blatt: “Some com­ments on nar­row band-pass fil­ters,” Quart. Ap­pl. Math. 18 : 4 (1960–1961), pp. 387–​393. MR 121867 Zbl 0099.​34601 article

[15] M. Rosenblatt: “In­de­pend­ence and de­pend­ence,” pp. 431–​443 in Pro­ceed­ings of the fourth Berke­ley sym­posi­um on math­em­at­ic­al stat­ist­ics and prob­ab­il­ity (Berke­ley, CA, 20–30 Ju­ly 1960), vol. 2. Edi­ted by J. Ney­man. Uni­versity of Cali­for­nia Press (Berke­ley, CA), 1961. MR 133863 Zbl 0105.​11802 incollection

[16] W. Freiber­ger, M. Rosen­blatt, and J. W. Van Ness: “Re­gres­sion ana­lys­is of vec­tor-val­ued ran­dom pro­cesses,” J. Soc. In­dust. Ap­pl. Math. 10 : 1 (March 1962), pp. 89–​102. MR 137266 Zbl 0111.​32902 article

[17] M. Rosen­blatt: “Asymp­tot­ic be­ha­vi­or of ei­gen­val­ues of Toep­litz forms,” J. Math. Mech. 11 : 6 (1962), pp. 941–​949. MR 150841 Zbl 0108.​31205 article

[18] M. Rosen­blatt and D. Slepi­an: “$$N$$th or­der Markov chains with every $$N$$ vari­ables in­de­pend­ent,” J. Soc. In­dust. Ap­pl. Math. 10 : 3 (September 1962), pp. 537–​549. MR 150824 Zbl 0154.​43103 article

[19] M. Rosen­blatt: “Equicon­tinu­ous Markov op­er­at­ors,” Teor. Ver­o­jat­nost. i Primenen. 9 : 2 (1964), pp. 205–​222. Rus­si­an trans­la­tion of art­icle pub­lished in The­ory Probab. Ap­pl. 9:2 (1964). MR 171318 Zbl 0133.​40101 article

[20] D. R. Brillinger and M. Rosen­blatt: “Com­pu­ta­tion and in­ter­pret­a­tion of $$k$$-th or­der spec­tra,” pp. 189–​232 in Ad­vanced sem­in­ar on spec­tral ana­lys­is of time series (Madis­on, WI, 3–5 Oc­to­ber 1966). Edi­ted by B. Har­ris. John Wiley (New York), 1967. MR 211567 Zbl 0157.​47403 incollection

[21] D. R. Brillinger and M. Rosen­blatt: “Asymp­tot­ic the­ory of es­tim­ates of $$k$$th-or­der spec­tra,” Proc. Natl. Acad. Sci. U.S.A. 57 : 2 (February 1967), pp. 206–​210. A longer ver­sion of this was pub­lished in Ad­vanced sem­in­ar on spec­tral ana­lys­is of time series (1967). MR 207021 Zbl 0146.​40805 article

[22] Stat­ist­ic­al mod­els and tur­bu­lence (La Jolla, CA, 15–21 Ju­ly 1971). Edi­ted by M. Rosen­blatt and C. Van At­ta. Lec­ture Notes in Phys­ics 12. Spring­er (Ber­lin), 1972. MR 438885 Zbl 0227.​76079 book

[23] K. S. Lii and M. Rosen­blatt: “De­con­vo­lu­tion and es­tim­a­tion of trans­fer func­tion phase and coef­fi­cients for non-Gaus­si­an lin­ear pro­cesses,” Ann. Stat. 10 : 4 (1982), pp. 1195–​1208. MR 673654 Zbl 0512.​62090 article

[24] K.-S. Lii and M. Rosen­blatt: “Es­tim­a­tion for al­most peri­od­ic pro­cesses,” Ann. Stat. 34 : 3 (2006), pp. 1115–​1139. A cor­rec­tion to this art­icle was pub­lished in Ann. Stat. 36:3 (2008). MR 2278353 Zbl 1113.​62111 article

[25] M. Rosen­blatt: “An ex­ample and trans­ition func­tion equicon­tinu­ity,” Stat­ist. Probab. Lett. 76 : 18 (December 2006), pp. 1961–​1964. MR 2329240 Zbl 1108.​60067 article