I am currently a Research Scientist at Amazon Alexa AI on the acoustic event detection team. I completed my PhD in Electrical & Computer Engineering at Johns Hopkins University in the Center for Language and Speech Processing, co-advised by Sanjeev Khudanpur and Shinji Watanabe. Before that, I earned my undergraduate degree at Carnegie Mellon University. My areas of expertise are deep learning, statistical modeling, and signal processing; in general, I like to work on waveform-level processing of acoustic signals.

In particular, I am interested in working with ambient acoustic environments, featuring long-term recordings that pick up multiple far-field sources. Most recently, I have been working on detecting events within such recordings. My PhD dissertation focused on single-channel speech separation: the task of producing a separate audio waveform for each speaker in a recording where multiple people are speaking simultaneously. I am also interested in speaker diarization and speaker identification.