Senior Manager & Chief Research Scientist
NEC Laboratories Europe
Heidelberg, Germany
Building human-centered, reliable AI for real-world impact and societal benefit
I am a Senior Manager and Chief Research Scientist at NEC Laboratories Europe, where I oversee the Human-Centric AI and Reliable GenAI Solutions groups and lead research on natural language processing, generative AI, and agentic systems. My work focuses on building AI that is understandable, reliable, and useful in practice, advancing human-AI collaboration while translating research into real-world solutions, publications, and patents.
Before joining NEC, I was a PhD student and graduate research assistant at Heidelberg University, where I worked on reinforcement learning from human feedback for generative AI.
Leadership & Service
I lead research across the Human-Centric AI and Reliable GenAI Solutions groups at NEC Laboratories Europe, combining academic research with solution-building for NEC. In 2025, my team and I won NEC's Outstanding Value Award for developing a technology that helps users spot potential LLM hallucinations. I also serve the broader research community, including in Senior Area Chair roles for ACL and NAACL, and received the Outstanding Senior Area Chair Award at ACL 2023.
Contact / Collaboration
I am always interested in conversations around trustworthy generative AI, agentic systems, and human-AI collaboration, especially where research can create practical value in high-impact settings. For collaborations, invited talks, or research discussions, please feel free to reach out.
News
- ACL 2026: two papers accepted on LLM steering and reasoning-model-based evaluation.
- EMNLP 2025: new work on how language model design decisions affect downstream performance.
- ACL 2025: three papers accepted on context attribution, differential-diagnosis agents, and synthetic data evaluation.
Current Focus
- Designing AI systems that are supportive, understandable, and aligned with how people actually work and reason.
- Building modular and trustworthy agentic AI systems that can support complex workflows and practical NEC applications.
- Applying these ideas in domains such as finance, public safety, and other settings where AI should augment human expertise rather than replace it.
Selected Papers
- Explores controllable behavior in LLMs, pointing toward more steerable and governable generative systems.
- Shows how modular agent frameworks can support explainable decision-making in a high-stakes healthcare setting.
- Frames safety as a practical research and deployment challenge for real-world LLM adoption.
- Demonstrates how LLMs can support structure discovery with minimal task-specific supervision.