I am an Audio Machine Learning Scientist at Bose Corporation. My current research centers on developing novel ML-based methods for lightweight, on-device speech and audio signal processing, with a particular focus on speech enhancement and hearing augmentation.
Previously, I was a PhD student in the Department of Bioengineering and the Centre for Neurotechnology at Imperial College London (ICL). As a member of the Sensory Neuroengineering lab led by Prof. Tobias Reichenbach, I studied the neural mechanisms underlying the perception and comprehension of natural speech, especially in challenging listening conditions. In my work, I combined computational modelling with neuroimaging and non-invasive brain stimulation to understand how natural speech is processed along the human auditory pathway.
Alongside my PhD research, I worked as an Applied Scientist Intern at Amazon Lab126, a Scientific Advisor at Logitech, and a Consultant for clinical data analysis at INBRAIN Neuroelectronics.
For more details, see my CV, explore this website or get in touch!
PhD in Neurotechnology, 2022
Imperial College London, UK
MRes in Neurotechnology, 2018
Imperial College London, UK
MSc in Biomedical Engineering, 2017
Imperial College London, UK
BEng in Biomedical Engineering, 2016
Warsaw University of Technology, Poland
* equal contribution
Also available on my Google Scholar profile.
More code on my GitHub.
(Code) Hybrid BYOL speech representation learning.
(Code) A multilingual benchmark for speech emotion recognition.
(Code) Computational model of the effect of non-invasive brain stimulation on speech-in-noise processing.
(Code) Python port of the NSL toolbox used for auditory modelling.
(Code) Custom set of tools for EEG processing and analysis.
(Code) Transferable pre-trained feature extractor for speech processing.
(Demo) Algorithm for recovering missing or severely degraded parts of time-frequency representations of speech.
(Code) Complex temporal response functions (TRFs) for modelling auditory brainstem responses to continuous speech from full-cap EEG.
(Code) EMD-based algorithm for extracting the F0 waveform from continuous speech. Maintained version of the original implementation by A.E. Forte.