I am a PhD candidate at TU Delft, working on conversational search under the supervision of Claudia Hauff. Before starting the PhD, I worked on machine learning and data science projects for three years at Hekima.
My research currently focuses on neural rankers for information-seeking conversations. We created MANtIS, a novel corpus of information-seeking conversations with several properties that previous datasets lack. We studied how neural rankers for dialogue perform on unseen domains, and how to improve them in the domain adaptation setup. We also showed that intelligently ordering the training batches yields more effective neural rankers: curriculum learning for IR. In our latest work we reflect on the challenges of current offline evaluation schemes for conversational search tasks.
I am currently developing a library that facilitates experiments with pre-trained transformers, e.g., BERT, for ranking: transformer-rankers. It can be used to quickly train and evaluate a transformer-based model on different ranking tasks, such as passage retrieval, ad hoc retrieval, and conversation response ranking.
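To illustrate the kind of evaluation such ranking experiments involve, here is a minimal, self-contained sketch of Mean Reciprocal Rank (MRR) over reranked candidate lists. The scoring lists and labels below are toy placeholders, and the function is a generic illustration, not the transformer-rankers API.

```python
def mean_reciprocal_rank(all_scores, all_labels):
    """MRR over queries: average reciprocal rank of the first relevant candidate."""
    total = 0.0
    for scores, labels in zip(all_scores, all_labels):
        # Sort candidates by model score, highest first.
        ranked = sorted(zip(scores, labels), key=lambda p: p[0], reverse=True)
        for rank, (_, label) in enumerate(ranked, start=1):
            if label == 1:
                total += 1.0 / rank
                break
    return total / len(all_scores)

# Two toy queries with three candidates each (label 1 = relevant response).
scores = [[0.9, 0.2, 0.5], [0.1, 0.8, 0.3]]
labels = [[0, 0, 1], [1, 0, 0]]
print(mean_reciprocal_rank(scores, labels))  # relevant items land at ranks 2 and 3
```

In a real experiment the scores would come from a fine-tuned cross-encoder (e.g., BERT scoring each query-candidate pair), and MRR would be computed over the full test set.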
|Jul 28, 2020||Two papers accepted at RecSys’20: (1) on probing LMs for conversational recommendation and (2) on enhancing recommender systems with performance estimates.|
|Jul 10, 2020||I am open sourcing a library to quickly conduct experiments with pre-trained transformers for different ranking tasks: transformer-rankers.|
|Jul 7, 2020||Our paper on challenges of the evaluation of conversational search systems was accepted at the KDD Converse’20 workshop.|
|Jan 26, 2020||Our paper on domain adaptation for conversation response ranking was accepted at the CAIR’20 workshop.|
|Dec 20, 2019||The preprint of our ECIR’20 paper on curriculum learning for the conversation response ranking task is up.|