Machine Learning Research Scientist
NEC Labs Europe
I am a research scientist in the Machine Learning group of NEC Labs Europe. I work on natural language processing (NLP), graph-structured data, and making machine-learned models more interpretable.
From 2014 to 2018, I was a graduate research assistant and PhD student in the Statistical NLP group at Heidelberg University, supervised by Prof. Dr. Stefan Riezler. During my PhD, I worked on NLP problems such as machine translation and question answering, and I explored how reinforcement learning can be applied to them.
- Machine Learning, particularly for Graph-structured Data and using Reinforcement Learning
- Explainable AI: AI should enrich our lives, and for this we should work on interpretable, trustworthy, and inclusive AI
- Natural Language Processing, particularly Question-Answering and Dialogue
Recently, colleagues and I developed a method to explain the predictions of neural matrix factorization models, which are used for knowledge base completion and in recommender systems. You can check out a related talk here:
- April 8: Talk at Python for ML and AI Summit
- February 9: GCLR Workshop Presentation of Gradient Rollback
- February 4 & 5: AAAI 2021 presentation of our paper “Explaining Neural Matrix Factorization with Gradient Rollback”
- January 19: [YouTube Video] Talk at the Zurich NLP Meetup on “From Text & Graphs to Explainable New Knowledge”
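The core intuition behind Gradient Rollback, mentioned above, can be sketched in a few lines: during training we record the accumulated parameter updates each training example contributed, and later we estimate an example's influence on a prediction by subtracting those updates and observing how the score changes. The toy model below (a DistMult-style matrix factorization on a four-entity knowledge graph) is an illustrative sketch of this idea, not the paper's implementation; all names and hyperparameters are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_rel, dim, lr = 4, 2, 8, 0.1

E = rng.normal(scale=0.1, size=(n_ent, dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(n_rel, dim))   # relation embeddings

def score(s, r, o):
    """DistMult-style score for the triple (s, r, o)."""
    return float(np.sum(E[s] * R[r] * E[o]))

# tiny knowledge graph of (subject, relation, object) triples
train = [(0, 0, 1), (1, 0, 2), (2, 1, 3), (0, 1, 3)]

# accumulated parameter updates caused by each training triple
updates = {t: {"E": np.zeros_like(E), "R": np.zeros_like(R)} for t in train}

for _ in range(200):                     # plain SGD on a squared loss
    for (s, r, o) in train:
        err = score(s, r, o) - 1.0       # push positive triples toward 1
        gs = err * R[r] * E[o]           # gradient w.r.t. E[s]
        gr = err * E[s] * E[o]           # gradient w.r.t. R[r]
        go = err * E[s] * R[r]           # gradient w.r.t. E[o]
        E[s] -= lr * gs
        R[r] -= lr * gr
        E[o] -= lr * go
        upd = updates[(s, r, o)]         # record this triple's updates
        upd["E"][s] -= lr * gs
        upd["E"][o] -= lr * go
        upd["R"][r] -= lr * gr

def influence(triple, target):
    """Score drop on `target` when `triple`'s updates are rolled back."""
    upd = updates[triple]
    base = score(*target)
    E_bak, R_bak = E.copy(), R.copy()
    E[...] -= upd["E"]                   # roll back this triple's updates
    R[...] -= upd["R"]
    rolled = score(*target)
    E[...], R[...] = E_bak, R_bak        # restore the trained parameters
    return base - rolled

target = (0, 0, 1)
ranked = sorted(train, key=lambda t: influence(t, target), reverse=True)
print("most influential training triple for", target, "->", ranked[0])
```

Ranking training triples by this rollback score yields a simple explanation of the form "the model predicts this triple mainly because of these training examples"; the actual method additionally provides approximation guarantees that this toy version omits.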
- Area Chair for EMNLP
- Reviewer for: ACL, NAACL, RepL4NLP, DeeLIO
- December 12: Video Presentation of Learning from Human Feedback: Challenges for Real-World Reinforcement Learning in NLP at the Challenges of Real-World RL Workshop, co-located with NeurIPS 2020.
- December 9: Poster presentation of Gradient Rollback at the WiML workshop, co-located with NeurIPS 2020.
- September 30: Talk at the virtual Natural Language Processing Copenhagen Meetup on “Bidirectional Sequence Generation and Graph AI”.
- September 24: Talk at the first virtual Heidelberg Laureate Forum (HLF) about “The Knowledge Pipeline: Extract, Enrich and Explain”.
- July 14: Blog post that summarizes the ACL 2020 track “Interpretability and Analysis of Models for NLP”
- July 3-10: I was an official microblogger for ACL 2020, focusing on the track “Interpretability and Analysis of Models for NLP” (see also my Twitter handle below). I was also one of the mentors at ACL 2020.
- May 13: Talk at the StatNLP Colloquium about “Bidirectional Sequence Generation and its Prototype Transfer”.
- April 30: Talk at KMD’s STEAM talk series about how to “Extract, Enrich & Explain Knowledge”.
- February 3: New blog post about our paper “Attending to Future Tokens for Bidirectional Sequence Generation”
- January 14: Lecture on reinforcement learning and its applications to sequence-to-sequence NLP at the ECOLE winter school
- Area Chair for EACL
- Reviewer for: AAAI, EMNLP, ACL, IJCAI, COLING, RepL4NLP, NAACL