Xinya Du
- Assistant Professor of Computer Science
- Human Language Technology Research Institute
- University of Texas at Dallas
-
- Email: xinya.du AT utdallas.edu
I am Xinya Du, a tenure-track assistant professor in the Department of Computer Science at the University of Texas at Dallas.
My research focuses on Natural Language Processing (NLP) and Machine Learning.
I was a Postdoctoral Research Associate at the University of Illinois at Urbana-Champaign.
Before that, I earned a Ph.D. in Computer Science from Cornell University and a bachelor's degree in
CSE from Shanghai Jiao Tong University.
My work has been published in leading NLP conferences, covered by major media outlets, and included in the list of Most Influential ACL Papers (15 each year).
I was named a Spotlight Rising Star in Data Science (NLP).
Teaching this term: CS6320 Natural Language Processing.
Prospective Students: If you're interested in working with me, please apply to the UTD CS PhD program and list me as a potential advisor.
Feel free to send me an email after applying -- including your CV, transcript, and/or samples of your work (concatenated).
More details for PhD, MS, and intern students are available here.
Recent
I enjoy exploring/building things that are novel and impactful (in research and life).
My current research focuses on building intelligent natural language processing (NLP) systems that are trustworthy, explainable, and aligned with human values:
-
Document Understanding in the Dynamic World:
How do we create efficient and logically sound NLP systems capable of extracting structured factual knowledge from lengthy and intricate documents?
How do we represent contextual information such as "memory" and "time"?
How do we build efficient systems that can detect incoherent predictions, as well as improve their consistency?
-
Reasoning Capability and Knowledge:
How do we enable NLP systems to conduct faithful and explainable reasoning across modalities? Specifically:
How do we incorporate contextual and external knowledge into end-to-end models and their reasoning, for more faithful and factual behavior?
How can we use models to induce new rules and hypotheses, and how do we understand the reasoning capabilities of large pre-trained models?
How do we comprehensively evaluate their reasoning capabilities and the transparency of their explanations, so that they align closely with human judgments?
-
NLP and Vision / Robotics / Society / Law:
How do we design NLP techniques that work across modalities and support interdisciplinary research?
How do we better align LLMs with human values? How do we enable models (or humans) to understand community policies and behave accordingly?