from brain cancer. I asked Misha, who worked with him, to write about it.
Partha Niyogi, Louis Block Professor of Computer Science and Statistics at the University of Chicago, passed away on October 1, 2010, at the age of 43.
I first met Partha Niyogi almost exactly ten years ago, when I was a graduate student in math and he had just started as a faculty member in Computer Science and Statistics at the University of Chicago. Strangely, we first talked at length because of a somewhat convoluted mathematical argument in a paper on pattern recognition. I asked him some questions about the paper, and, even though the topic was new to him, he put serious thought into it, and we started regular meetings. We made significant progress and developed a line of research that stemmed initially just from trying to understand that one paper and to simplify one derivation. I think this was typical of Partha, showing both his intellectual curiosity and his intuition for the serendipitous: a sense of, and focus on, inquiries worth pursuing, no matter how remote or challenging, and a willingness to bring his unique vision to new areas. We worked together continuously from that first meeting until he became too sick to continue. Partha was a great adviser and a close friend to me; I am deeply thankful to him for his guidance, intellectual inspiration, and friendship.
Partha had a broad range of research interests centered on the problem of learning, which had fascinated him since he was an undergraduate at the Indian Institute of Technology. His research had three general themes: geometric methods in machine learning, particularly manifold methods; language evolution and language learning (he recently published a 500-page monograph on the subject); and speech analysis and recognition. I will not discuss his individual works here; a more in-depth summary of his research appears in the University of Chicago Computer Science department obituary. It is enough to say that his work has been quite influential and widely built upon. In every one of these areas he had his own approach: distinct, clear, and unafraid to challenge unexamined conventional wisdom. To lose this intellectually rigorous but open-minded vision is not just a blow to those of us who knew him and worked with him, but to the field of machine learning itself.
I owe a lot to Partha, to his insight and his thoughtful attitude toward research and toward every aspect of life. It was a great privilege to be Partha’s student, collaborator, and friend; his passing leaves a deep sadness and emptiness. It is hard to believe Partha is no longer with us, but his friendship and what I learned from him will stay with me for the rest of my life.
Partha’s approach to research made a deep impression on me when we were both at Bell Labs. I think it would make a great case study in the high-risk, high-payoff strategy for research.
Partha started by comparing what he knew of phonetics with what he knew about the engineers’ favored approach, hidden Markov models (HMMs). He observed that there was no way the three-state forward/loop models of triphones in context could ever capture timing information, such as voice onset time, which is all that distinguishes English /d/ from /t/. He modeled length explicitly, showed some promising initial results, and was pretty much dismissed as a crackpot by the electrical engineers for not using HMMs. Even then, it was easy to see that Partha was not only right in his analyses but also had interesting hypotheses about how to solve the outstanding problems.
Earlier this year at U. Chicago, Partha excitedly showed me the current state of his speech recognition work. He had taken it well beyond where I had last seen it at Bell Labs (I’m not in speech recognition, so I hadn’t been keeping up with the field). In his recent implementations, it was possible to do completely feature-based speech recognition in real time.
I had been hoping to meet him again when I was back in Chicago a couple of weeks ago. I wanted to follow up on a discussion we had been having about his work on language evolution. As with speech recognition, Partha was focusing on deep, hard, and interesting problems at the crux of the whole issue of language evolution: how do phonetic and lexical (or grammatical, etc.) systems evolve to balance parsimony and interpretability?
Sadly, Partha had died the previous week. We’ll all miss his insight, generosity, and especially his brave approach to the really hard problems.
I cannot forget his Laplacian Eigenmaps, which I started studying not so long ago. I did not know him in person at all, but…
Rest in peace, Partha.