Celebratio Mathematica

Murray Rosenblatt
1926–2019

by Richard A. Davis

Rosenblatt was born in New York City on September 7, 1926, the younger of two sons of Hyman and Ester (Goldberg) Rosenblatt, immigrants from Ukraine and Poland, respectively. Murray's older brother, David, and David's spouse, Joan, were also mathematicians/statisticians of some note. Both had distinguished careers in the federal government, with Joan spending her entire professional career at the National Institute of Standards and Technology (formerly the National Bureau of Standards). Murray graduated from high school at the age of 16 and began studying mathematics at City College of New York (CCNY) in 1942. While at CCNY, Murray focused on courses in mathematical physics and thermodynamics. He was inspired by one of his analysis professors, Emil Post, who encouraged him to continue studying mathematics. After completing a bachelor of science degree at CCNY in 1946, he entered the graduate program in mathematics at Cornell University that same year. Although he had no previous experience in probability and statistics, he was exposed to two giants in the field, William Feller and Mark Kac, whose energy and synergy in probability rubbed off on Murray. These were exciting times at Cornell, which attracted a number of promising young visitors, including Joseph Doob, Monroe Donsker, Donald Darling, and Kai Lai Chung. Murray's first courses in probability theory and mathematical statistics were taught by Feller. In the end, he chose Kac as his Ph.D. supervisor because he found Kac's personality more compatible with his own and because Kac allowed students more independence in their choice of research topics.

During his undergraduate years, Murray had a chance encounter with his future wife, Adylin Lipson, at the Fordham Road Library in the Bronx. Apparently, Adylin, known as Ady, did not make a positive first impression on Murray's mother, who blamed Ady for enticing Murray into walking home from the library in the rain, whereupon he developed a severe cold. Ady was also born and raised in the Bronx; she received a bachelor's degree in social work from Hunter College in 1947 and would later earn a master's degree from San Diego State University in 1971. She and Murray were married in Ithaca in 1949, shortly after Murray received his Ph.D., and the couple spent one year in Ithaca before Murray began his academic career at the University of Chicago.

Career

After entering graduate school at Cornell, Murray's interest in analysis and thermodynamics naturally attracted him to Kac's work on statistical mechanics. He chose to work with Kac because Kac "sort of left you alone without saying you've got to do this or that so forth and so on. He let you go your own way." This advising philosophy would become Murray's mantra in mentoring his own Ph.D. students. As Murray described it, his dissertation, "On distributions of certain Wiener functionals," was "an attempt to mildly generalize some results of Kac" related to the seminal Feynman–Kac formula. The thesis, published in the Transactions of the American Mathematical Society under the title "On a class of Markov processes," derived properties of the Laplace transform of the distribution of certain integral functionals of Brownian motion. The resulting transform is found as the solution to a type of partial differential equation. Following his defense, Murray remained at Cornell for another year, supported by Kac's Office of Naval Research grant.

After Murray finished his work at Cornell, academic positions were scarce, and he was about to accept a government job when the University of Chicago stepped up and offered him a position in its statistics group. Murray believed that Kac likely played a behind-the-scenes role in making this connection. Murray joined the Committee on Statistics, the precursor to the Department of Statistics at Chicago, which was then headed by W. Allen Wallis and included a number of budding young researchers: Jimmie Savage, Leo Goodman, Raghu Raj Bahadur, William Kruskal, and Charles Stein. It was at Chicago, in 1951–52, that Murray first met Ulf Grenander, a young Swedish scholar. They struck up a short but productive collaboration that produced a number of highly influential time series papers dealing primarily with estimation in the spectral domain. One of their few time-domain papers considered estimation of a linear regression model with time series errors. They provided conditions on the regressors and the spectrum of the error process under which ordinary least squares (OLS) estimates are asymptotically efficient. This is one of the fundamental results in time series analysis. Much of their collaborative work appeared in their much-acclaimed 1957 book Statistical Analysis of Stationary Time Series, which became a must-read for both practitioners and researchers in time series for decades. With its large number of bright young researchers and an active visitors program, the environment at Chicago was nearly ideal for someone like Murray. In addition to writing a couple of papers with visiting scholar Joe Hodges, he was also inspired by Bahadur. Murray credited discussions with Bahadur for the genesis of his "little paper" on nonparametric density estimation [3], which appeared in the Annals of Mathematical Statistics in 1956. Of course this "little paper," which currently has around 5,000 citations, was the first to introduce kernel density estimation, now one of the standard nonparametric estimation techniques and the source of a rich and important line of research activity.

In 1956, Murray left Chicago for the mathematics department at Indiana University, where he struck up a collaboration with Julius Blum. The stint at Indiana lasted only a few years before he landed at Brown University in 1959. Returning to the East Coast had many attractions, including the opportunity to meet regularly with researchers at nearby Bell Laboratories. There he interacted with David Slepian and others, who triggered his interest in modeling physical science phenomena. Motivated by a problem in human vision, Murray and Slepian wrote a paper on an \( n \)th-order Markov process in which every \( n \) variables are independent but \( n+1 \) variables are dependent [5]. While visiting Bell Labs, Murray met one of John Tukey's Ph.D. students, David Brillinger, who was working on higher-order properties of time series models. This interaction developed into a series of collaborative papers on higher-order spectra.

During his Brown years, Murray had an opportunity to visit the Statistics Department at Columbia University. It was there that he wrote perhaps his most well-known paper, "A central limit theorem and a strong mixing condition" [2], which introduced the strong mixing condition. This work was motivated in part by an idea of Sergei Bernstein for breaking a sequence of random variables into nearly independent blocks. The condition on the blocks, which specifies that they become roughly independent as the blocks grow in size and the separation increases, is in essence the strong mixing condition.

Murray left Brown for the newly created University of California at San Diego (UCSD) in 1964, one year before the first undergraduates were admitted. Scripps Institution of Oceanography predated the university, and its strong research group, which was analyzing a multitude of time series, was a major attraction, not to mention the San Diego weather. These were heady times to be part of the new faculty that would, in short order, turn UCSD into a world-class research university. UCSD was fortunate to have attracted a number of talented young probabilists and continues to this day to have a major presence in probability theory.

Technical contributions

Strong mixing and the central limit theorem

In his most celebrated paper, Rosenblatt introduced the notion of strong mixing for stationary time series. If \( \{X_t\} \) is a stationary time series, then strong mixing specifies the rate at which events defined by two sets of random variables separated by \( n \) time lags become independent. So, for example, if \[ \mathcal{B}_0 = \sigma(X_s,\ s \leq 0) \quad \text{and} \quad \mathcal{F}_n = \sigma(X_s,\ s\geq n) \] are the \( \sigma \)-fields generated by the past relative to time 0 and the future from time \( n \), then the time series is said to be strong mixing if \[ \sup_{A\in \mathcal{B}_0,\,B\in \mathcal{F}_n} |P(A\cap B)- P(A)P(B)| \leq \alpha(n), \] where the mixing function \( \alpha (\,\cdot\,) \) decreases to zero as \( n \to\infty \). Although other mixing conditions existed around that same time, including \( m \)-dependence, in which random variables separated by more than \( m \) time lags are independent, Rosenblatt, in his understated manner, wrote, "The strong mixing condition used in this paper seems to be a more intuitively appealing formalization of this notion than most others." Of course he was right, as the strong mixing condition became one of the most commonly used conditions for establishing a central limit theorem (CLT). Under a mean-zero and finite \( (2 + \delta) \)-moment condition on \( X_t \), a growth restriction on the variance of the partial sums \( S_n = \sum^n_{t=1}X_t \), and a convergence rate of the mixing function to 0, Rosenblatt showed that \( S_n/\sqrt{n} \) is asymptotically normally distributed, although his results were more general than stated here. The idea of the proof was based on Bernstein's big-block/small-block construction. Essentially, the sequence of random variables \( X_1,\dots,X_n \) is separated into \( k_n \) blocks of size \( p_n + q_n \), where \( p_n \) and \( q_n \) are the sizes of the big and small blocks, respectively, with \( n=k_n(p_n+q_n) \). The big blocks of random variables are then separated by \( q_n \) lags, and hence become approximately independent via the strong mixing condition. The sum over the small blocks becomes negligible in the limit, and hence \( S_n \) can be viewed as roughly a sum of \( k_n \) independent random variables, which is asymptotically normal by the standard CLT.
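
To make this concrete, the following is a minimal numerical sketch (not from Rosenblatt's paper; the AR(1) model, parameters, and seed are illustrative assumptions). A Gaussian AR(1) process with \( |\phi|<1 \) is strong mixing, and the sketch checks that \( S_n/\sqrt{n} \) behaves like a normal random variable with the long-run variance \( \sum_h \gamma(h) = \sigma^2/(1-\phi)^2 \), as the CLT predicts.

```python
import numpy as np
from scipy.signal import lfilter

# Illustrative sketch: a Gaussian AR(1) process x_t = phi*x_{t-1} + e_t
# is strong mixing, so S_n / sqrt(n) should be approximately normal
# with long-run variance sum_h gamma(h) = sigma^2 / (1 - phi)^2.
rng = np.random.default_rng(0)
phi, sigma, n, reps = 0.6, 1.0, 2_000, 2_000

e = sigma * rng.standard_normal((reps, n))
x = lfilter([1.0], [1.0, -phi], e, axis=1)   # runs the AR(1) recursion
s = x.sum(axis=1) / np.sqrt(n)               # normalized partial sums

print("sample variance of S_n/sqrt(n):", s.var())
print("long-run variance             :", sigma**2 / (1 - phi) ** 2)
# Quick normality check: the standardized third moment should be near 0.
z = (s - s.mean()) / s.std()
print("sample skewness               :", (z**3).mean())
```
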
Kernel density estimation

Rosenblatt's seminal 1956 paper on nonparametric density estimation introduced the kernel density estimate [3]. The objective of this paper was to construct an estimate of the probability density function (pdf) \( f(x) \) from a sample of independent and identically distributed random variables \( X_1,\dots, X_n \) having density \( f \). The motivation for the kernel estimate comes from taking a symmetric difference of the empirical distribution function \[ F_n (x) = \frac{\# \{i\leq n:X_i \leq x\}}{n}. \] An obvious estimate of the pdf is given by the difference quotient \[ f_n(y) = \frac{F_n(y+h) - F_n (y-h)}{2h}, \] where the bandwidth \( h=h_n \) is a function of the sample size \( n \) that converges to 0 as \( n \) increases. Using well-known properties of \( F_n \), it is straightforward to derive the corresponding statistical properties of \( f_n \). Assuming the existence of three derivatives of \( f \), he showed that the choice of bandwidth minimizing the asymptotic integrated mean squared error, \[ \int^{\infty}_{-\infty} E|f_n(y)-f(y)|^2 \, dy, \] is \[ h_n=kn^{-\tfrac{1}{5}}, \quad\text{where}\quad k = \biggl[\frac{9}{2\int^{\infty}_{-\infty}|f^{\prime\prime}(y)|^2\, dy}\biggr]^{\tfrac{1}{5}}. \] This naive estimate is an example of a kernel density estimate in which the kernel is \( w(x)=\tfrac{1}{2} \) for \( |x|\leq 1 \), and 0 otherwise. More generally, the paper considered kernel density estimates given by \[ f_n(y)=\int^{\infty}_{-\infty}w_n(y-u)\,dF_n(u) =\frac{1}{n}\sum^{n}_{t=1}w_n(y-X_t), \] where \( w_n(u)=\frac{1}{h}w\bigl(\frac{u}{h}\bigr) \) and \( w\geq 0 \) satisfies \[ \int^{\infty}_{-\infty}w(u)\,du = 1,\quad \int^{\infty}_{-\infty} uw(u)\,du = 0, \quad \text{and}\quad \int^{\infty}_{-\infty} |u|^3 w(u)\,du < \infty. \] The mean squared error of these estimates is of order \( O(n^{-4/5}) \). This basic idea was extended by many others to a variety of problems, including nonparametric regression estimation, and a great deal of effort went into designing optimal kernel functions.

Rosenblatt continued his research on nonparametric density estimation throughout much of his research life, and one of his most noted papers, with Bickel on global measures of density estimates, appeared in the Annals of Statistics in 1973 [6]. The idea was to use the kernel density estimate (a) to construct uniform confidence bands for \( f \) of the form \( f_n \pm c_n \sqrt{f_n} \) and (b) to construct goodness-of-fit tests of the null hypothesis \( H:f=f_0 \) using the test statistic \[ \max_x\frac{| f_n(x) - f_0(x)|}{\sqrt{f_0(x)}}. \] The limit theory was based on the heuristic that the process \[ U_n(x):=\sqrt{nh_n}\,\frac{ f_n(x)-f(x)}{\sqrt{f(x)}}, \] which is asymptotically independent at distinct fixed values of \( x \), behaves locally like a stationary Gaussian process. The resulting test statistic could then be described as the maximum of a Gaussian process over an increasing interval of the real line. The results were ahead of their time, as the Bickel and Rosenblatt paper also made detailed use of the limiting (Gumbel) distribution of the maximum of a Gaussian process under different smoothness conditions.
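
Here is a minimal sketch of the 1956 naive estimate above, assuming standard normal data so that the unknown roughness \( \int|f''|^2 = 3/(8\sqrt{\pi}) \) can be plugged into Rosenblatt's bandwidth constant purely for illustration (in practice this integral must itself be estimated):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
sample = rng.standard_normal(n)   # f is the standard normal density

def naive_density(y, data, h):
    # f_n(y) = (F_n(y + h) - F_n(y - h)) / (2h), i.e., the kernel
    # estimate with the box kernel w(u) = 1/2 on |u| <= 1.
    u = np.abs(y[:, None] - data[None, :]) / h
    return (u <= 1.0).mean(axis=1) / (2.0 * h)

# Rosenblatt's optimal bandwidth h_n = k * n^(-1/5); for the standard
# normal, int |f''|^2 = 3 / (8 sqrt(pi)) -- plugged in for illustration.
roughness = 3.0 / (8.0 * np.sqrt(np.pi))
k = (9.0 / (2.0 * roughness)) ** 0.2
h = k * n ** (-0.2)

grid = np.linspace(-4.0, 4.0, 201)
f_true = np.exp(-0.5 * grid**2) / np.sqrt(2.0 * np.pi)
f_hat = naive_density(grid, sample, h)
print("bandwidth:", h)
print("max abs error on grid:", np.abs(f_hat - f_true).max())
```
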

Limit theory for long memory processes

In his 1960 Berkeley Symposium paper, "Independence and dependence," Rosenblatt considered limit theory for partial sums of stationary sequences exhibiting long memory. In such a setting, the CLT he had established for strongly mixing sequences no longer applies. In this paper, he considers a function of a stationary Gaussian process \( \{Z_t\} \) with mean 0 and autocovariance function \[ r(h) = (1+h^2)^{-\tfrac{\gamma}{2}} \] with \( 0 < \gamma < \frac12 \). He chose the simplest nonlinear function of \( \{Z_t\} \) (linear functions would again be Gaussian and not of interest), given by \( X_t=Z^2_t-1 \), which is again a mean-zero stationary time series, but one that is not strong mixing. The spectral density function of \( \{Z_t\} \) has a singularity of the form \( |\omega|^{\gamma-1} \) in a neighborhood of the origin. Rosenblatt then established the limit distribution of the normalized partial sums \( n^{\gamma-1}\sum^n_{t=1}X_t \). Given the long memory of the process, the variance of the partial sums grows faster than linearly, and hence the normalizing factor \( n^{\gamma-1} \) is much smaller than the usual \( n^{-1/2} \). In addition, the limit distribution is nonnormal. This example generated a great deal of interest and was picked up by others, notably M. S. Taqqu [e2] and R. L. Dobrushin and P. Major [e3], who extended this notion to a wide range of processes and were able to classify functional limits of the partial sum process.
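
A minimal simulation sketch of this example (the sample size, replication count, and value of \( \gamma \) are illustrative choices): draw the Gaussian process via a Cholesky factor of its covariance matrix, form \( X_t = Z_t^2 - 1 \), and inspect the normalized partial sums, whose limit, now known as the Rosenblatt distribution, is visibly skewed, unlike a CLT limit.

```python
import numpy as np
from scipy.linalg import cholesky, toeplitz

rng = np.random.default_rng(2)
gamma, n, reps = 0.3, 1_000, 500

# Stationary Gaussian process with r(h) = (1 + h^2)^(-gamma/2),
# simulated via a Cholesky factor of the Toeplitz covariance matrix.
r = (1.0 + np.arange(n) ** 2) ** (-gamma / 2.0)
L = cholesky(toeplitz(r), lower=True)
Z = L @ rng.standard_normal((n, reps))    # each column is one path

X = Z**2 - 1.0                            # mean zero, long memory
S = n ** (gamma - 1.0) * X.sum(axis=0)    # Rosenblatt's normalization

# The limit is nonnormal; positive skewness is the telltale sign.
z = (S - S.mean()) / S.std()
print("skewness of normalized partial sums:", (z**3).mean())
```
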

Bispectrum

Much of the early work in time series had a strong frequency-domain flavor, with a focus on the second-order spectrum. It was found, however, that second-order models were not sufficiently rich to capture a range of physical phenomena. In the late 1950s and early 1960s, A. Kolmogorov urged the young scholars V. P. Leonov and A. N. Shiryaev to consider higher-order models with a view towards describing nonlinear behavior. Such models require analysis that goes beyond standard spectral analysis, which deals only with second-order properties (the covariance function) of the process. In the mid-1960s, Rosenblatt began to work in a similar vein on higher-order spectra. Around this time, a promising Princeton Ph.D. student, David Brillinger, who was working at Bell Labs, was inspired by Rosenblatt's work, and an important collaboration was formed. In a series of influential papers in 1967, Brillinger and Rosenblatt developed the asymptotic theory for estimates of cumulant spectra under mixing conditions. This research set the stage for the identification of nonlinear time series models. Rosenblatt continued working on higher-order spectral theory throughout his professional career. These ideas played a key role in modeling nonminimum-phase and non-Gaussian linear processes, and also figured in his books Stationary Sequences and Random Fields and Gaussian and Non-Gaussian Linear Time Series and Random Fields.
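
As a rough illustration of the objects involved (this is not Brillinger and Rosenblatt's actual estimator, and normalization conventions are glossed over), a segment-averaged "direct" estimate of the bispectrum averages triple products of discrete Fourier transforms. For Gaussian noise the third-order cumulant spectrum vanishes, while a nonlinear transformation of the noise produces visible third-order structure:

```python
import numpy as np

def bispectrum(x, seg_len=128):
    # Direct segment-averaged estimate (normalization glossed over):
    # B(w_j, w_l) ~ mean_k d_k(w_j) d_k(w_l) conj(d_k(w_j + w_l)),
    # where d_k is the DFT of the k-th demeaned segment.
    segs = x[: len(x) // seg_len * seg_len].reshape(-1, seg_len)
    segs = segs - segs.mean(axis=1, keepdims=True)
    D = np.fft.fft(segs, axis=1)
    j = np.arange(seg_len)[:, None]
    l = np.arange(seg_len)[None, :]
    trip = D[:, j] * D[:, l] * np.conj(D[:, (j + l) % seg_len])
    return trip.mean(axis=0) / seg_len

rng = np.random.default_rng(3)
g = rng.standard_normal(8_192)
# Gaussian noise: third-order cumulants vanish, so the estimate should
# be markedly smaller than for the (non-Gaussian) squared series.
print("Gaussian noise   :", np.abs(bispectrum(g)).mean())
print("squared, centered:", np.abs(bispectrum(g**2 - 1.0)).mean())
```
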

Other major contributions

Rosenblatt made major contributions in a number of other areas as well. Much of this work seemed to be motivated by a conjecture of N. Wiener [e1] that gets at the heart of general time series modeling. The setup is the following. If \( \{X_t\} \) is a stationary process with \( \mathcal{B}_n = \sigma(X_j,\ j\leq n) \), when does the process have a one-sided causal representation, i.e., when can \( X_n \) be represented as \[ X_n = f (\xi_n,\xi_{n-1},\dots) \] for some iid sequence \( \{\xi_t\} \)? Wiener conjectured that a necessary and sufficient condition for such a representation is that the backward tail \( \sigma \)-field be trivial, \( \mathcal{B}_{-\infty} = \bigcap_n\mathcal{B}_n=\{\emptyset,\Omega\} \). Rosenblatt [4] showed this is true for Markov chains with a countable state space, and in [7], [8] he discussed when the conjecture holds and fails. Beyond the question of when Wiener's conjecture holds, one can see traces of this notion in many of Rosenblatt's research areas, including his interest in limits of convolutions of probability measures on compact semigroups and in the deconvolution problem. The basic setting of the deconvolution problem is that the observations \( X_t \) come from a linear system driven by iid noise, i.e., \[ X_t = \sum^{\infty}_{j=-\infty}\psi_j\xi_{t-j}, \] where \( \{\psi_j\} \) represents a linear filter and \( \{\xi_t\} \) is iid noise. The question then is, when can one recover the noise sequence? If the filter is known, this is relatively straightforward; if the filter is unknown, the goal is to deconvolve the filter from the noise. When the noise is Gaussian, there is an identifiability problem. Rosenblatt studied this problem in the non-Gaussian setting using higher-order spectra, and using likelihood methods when the distribution of the noise is known. It is of interest to know when the filter is minimum phase, which corresponds to the case in which the representation is one-sided, \( X_t=\sum^{\infty}_{j=0}\psi_j\xi_{t-j} \), depending only on the past values \( \xi_s \), \( s\leq t \). So Wiener's conjecture comes full circle to modern time series modeling.
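
A minimal sketch of the easy case mentioned above, where the filter is known (the filter coefficients and noise law here are illustrative assumptions): pass iid noise through the filter, then recover it by dividing out the transfer function in the frequency domain.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4_096
xi = rng.laplace(size=n)                 # iid, deliberately non-Gaussian
psi = np.array([1.0, 0.5, -0.3])         # known (illustrative) filter

# Transfer function Psi(w) = sum_j psi_j e^{-iwj} on the FFT grid;
# circular convolution keeps the sketch exact at the boundaries.
Psi = np.fft.fft(psi, n)
X = np.real(np.fft.ifft(np.fft.fft(xi) * Psi))

# Deconvolution: divide out Psi (valid because Psi has no zeros on the
# unit circle for these coefficients).
xi_hat = np.real(np.fft.ifft(np.fft.fft(X) / Psi))
print("max recovery error:", np.abs(xi_hat - xi).max())
```
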

Finally, there is the three-page Rosenblatt paper, "Remarks on a multivariate transformation," in the Annals of Mathematical Statistics [1]. The rather straightforward transformation described in this paper maps a random vector \( X=(X_1,\dots,X_k) \) with an absolutely continuous distribution into a random vector whose components are iid uniform on \( (0,1) \), by successively applying the conditional distribution functions. This is the multivariate analogue of the standard one-dimensional probability integral transform. Although this is not a deep result, it has proven extremely handy in modern statistics, especially in the context of copula modeling and goodness-of-fit testing for multivariate distributions. This paper has nearly 3,000 citations.
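
A minimal sketch for a bivariate Gaussian vector with correlation \( \rho \), where the conditional distribution functions are available in closed form (the parameters are illustrative): the transform maps \( (X_1, X_2) \) to \( (F_1(X_1),\, F_{2|1}(X_2\mid X_1)) \), which should be iid uniform on \( (0,1) \).

```python
import numpy as np
from scipy.stats import kstest, norm

rng = np.random.default_rng(5)
rho, n = 0.7, 10_000
x1 = rng.standard_normal(n)
x2 = rho * x1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

# Rosenblatt transform: marginal CDF of X1, then the conditional CDF
# of X2 given X1 (here X2 | X1 = x is N(rho * x, 1 - rho^2)).
u1 = norm.cdf(x1)
u2 = norm.cdf((x2 - rho * x1) / np.sqrt(1.0 - rho**2))

print("KS p-values :", kstest(u1, "uniform").pvalue, kstest(u2, "uniform").pvalue)
print("corr(u1, u2):", np.corrcoef(u1, u2)[0, 1])   # should be near 0
```
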

After Murray became Distinguished Professor Emeritus at UCSD in 1994, he remained actively engaged in research until the last few years of his life. After a courageous fight with cancer, Ady passed away in 2009. The Rosenblatts had two children, daughter Karin (currently of Champaign, Illinois) and son Daniel (of Live Oak, Texas). Murray advised twenty-two dissertations, and to many of these Ph.D. students, he and Ady served as surrogate parents during their graduate school years. He was there for his students with great encouragement and support. In celebration of Murray's 90th birthday, the Murray and Adylin Rosenblatt Endowed Lecture Series in Applied Mathematics was created at UCSD in 2016.

Concluding remarks

Murray Rosenblatt had amazing intuition and insight about random phenomena; it was a sixth sense about how things should work. He was a scholar who seemed never to have forgotten a historical fact or theorem. To his Ph.D. students, he appeared to be the most knowledgeable person they had ever encountered, on topics ranging from geography to history to the physical sciences and mathematics. While Murray had a soothing and relaxing style of lecturing, he could be laconic in both his writing and verbal communication. The latter could be especially perplexing to new researchers trying to decode his thoughts and insights on various mathematical problems. It was not uncommon for a former Ph.D. student or young colleague to fully grasp only years later what Rosenblatt had clearly understood and tried to convey: "Oh, that is what Murray meant" was a common realization. Murray's advising approach replicated the one he had experienced with Kac. He would let his students work on what they wanted and leave them to their own devices, but he offered clear guidance, especially when students were struggling to find their way. He made all of his students feel important, regardless of their aptitude, and provided behind-the-scenes support in job searches and other endeavors to help further their careers.

Later in life Rosenblatt became fascinated with Chinese silk and became a "quasi-expert," an expression he liked to use, on all things related to the manufacture and production of the fabric. He loved going on long walks and hiking in the mountains whenever he had the chance. Even at an advanced age, it was difficult for his younger companions to keep up with his pace. In later years, Murray and Ady very much enjoyed spending holidays in Hawaii with his brother David and sister-in-law Joan.

Stigler's law of eponymy states that no scientific discovery is named after its original discoverer. Murray would occasionally echo sentiments similar to this law to his Ph.D. students, who, although enthusiastic, were naive and did not know better [e4]. So it is somewhat ironic that a number of terms in probability and statistics now bear the Rosenblatt name, including the Rosenblatt transformation and the Rosenblatt process; strong mixing is also sometimes referred to as Rosenblatt mixing. Murray also tended to eschew labels. Was he a probabilist, a statistician, a mathematician? He spent his entire academic life in departments of mathematics, as opposed to statistics or engineering departments. His affinity for mathematics was never more evident than during one of our last conversations, when I asked, "What are you thinking about these days?" Murray responded, "I am thinking about the foundations of mathematics." He then pointed me to the book on his desk, The Princeton Companion to Mathematics, a heavy read that he was working through at the time.

Question answered — always a mathematician.

Major honors

1955 Fellow, Institute of Mathematical Statistics
1965–66 Guggenheim Fellow
1970 Wald Lectures
1971–72 Guggenheim Fellow
1975 Fellow of the American Association for the Advancement of Science
1979 Overseas Fellow, Churchill College, Cambridge University
1984 Elected to the National Academy of Sciences
2009 Fellow of the Society for Industrial and Applied Mathematics
2014 Fellow of the American Mathematical Society

Works

[1] M. Rosenblatt: "Remarks on a multivariate transformation," Ann. Math. Stat. 23:3 (1952), pp. 470–472. MR 49525 Zbl 0047.13104

[2] M. Rosenblatt: "A central limit theorem and a strong mixing condition," Proc. Natl. Acad. Sci. U.S.A. 42:1 (January 1956), pp. 43–47. MR 74711 Zbl 0070.13804

[3] M. Rosenblatt: "Remarks on some nonparametric estimates of a density function," Ann. Math. Stat. 27:3 (1956), pp. 832–837. MR 79873 Zbl 0073.14602

[4] M. Rosenblatt: "Stationary Markov chains and independent random variables," J. Math. Mech. 9:6 (1960), pp. 945–949. An addendum to this article was published in J. Math. Mech. 11:2 (1962). MR 166839 Zbl 0096.34004

[5] M. Rosenblatt and D. Slepian: "\( N \)th order Markov chains with every \( N \) variables independent," J. Soc. Indust. Appl. Math. 10:3 (September 1962), pp. 537–549. MR 150824 Zbl 0154.43103

[6] P. J. Bickel and M. Rosenblatt: "On some global measures of the deviations of density function estimates," Ann. Stat. 1:6 (1973), pp. 1071–1095. Corrections to this article were published in Ann. Stat. 3:6 (1975). MR 348906 Zbl 0275.62033

[7] M. Rosenblatt: "A comment on a conjecture of N. Wiener," Statist. Probab. Lett. 79:3 (February 2009), pp. 347–348. MR 2493017 Zbl 1165.60016

[8] M. Rosenblatt: "Stationary processes and a one-sided representation in terms of independent identically distributed random variables," pp. 311–315 in Dependence in probability, analysis and number theory (Graz, Austria, 17–20 June 2009), edited by I. Berkes, R. C. Bradley, H. Dehling, M. Peligrad, and R. Tichy. Kendrick Press (Heber City, UT), 2010. Volume in honor of Walter Philipp; this paper was dedicated to Philipp and the author's deceased wife. MR 2731063 Zbl 1213.60070