I am an EECS PhD student at MIT CSAIL, where I work
with Professor Anna Huang. My interests include audio,
generative modeling, and signal processing.
We present a method for capturing real acoustic spaces from only 12 room impulse response (RIR)
measurements, allowing us to render any audio signal as if it were played in the room and heard
from any location and orientation. We develop an audio inverse-rendering framework that
synthesizes the room's acoustics (monaural and binaural RIRs) at novel locations, enabling
immersive auditory experiences such as simulated music playback.
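Once the RIR at a listener position is known, rendering an arbitrary signal as heard in the room reduces to convolving the dry signal with that RIR. A minimal sketch of this step, using NumPy only; the exponentially decaying noise here is a synthetic stand-in for a measured RIR, not data from the actual system:

```python
import numpy as np

def apply_rir(dry: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Simulate playback in a room: convolve a dry signal with its RIR."""
    return np.convolve(dry, rir)

sr = 16000                                   # sample rate (Hz)
t = np.arange(sr) / sr                       # 1 second of time stamps
dry = np.sin(2 * np.pi * 440.0 * t)          # dry signal: a 440 Hz tone

# Stand-in RIR: exponentially decaying noise, 0.5 s long.
n_rir = sr // 2
rng = np.random.default_rng(0)
rir = rng.standard_normal(n_rir) * np.exp(-6.0 * np.arange(n_rir) / n_rir)

wet = apply_rir(dry, rir)                    # "play" the tone in the room
print(wet.shape)                             # length = len(dry) + len(rir) - 1
```

Listening from a different location or orientation then amounts to swapping in the RIR synthesized for that pose.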
A person's presence induces subtle changes in a room's acoustic properties. By observing these
changes, either explicitly via RIR measurements or passively by playing and recording music in
the room, we can determine a person's presence, location, and identity.
Everyday objects possess distinct sonic characteristics determined by their shape and material.
RealImpact is the largest dataset of real object impact sounds to date, comprising 150,000
recordings from 50 objects spanning a range of shapes and materials.