email: anubrata.das[at]utexas.edu twitter: @d_anubrata mastodon: sigmoid.social/@anubroto bluesky: anubrata.bsky.social
I am a Ph.D. candidate at the School of Information at the University of Texas at Austin, co-advised by Dr. Matt Lease and Dr. Jessy Li. I am part of the Laboratory for Artificial Intelligence and Human-Centered Computing (AI&HCC) and am associated with the UT NLP Group. During my Ph.D., I have also interned at Amazon Alexa Responsible AI Research, Cisco Responsible AI Research, and the Max Planck Institute for Informatics, where I worked with Dr. Gerhard Weikum.
Before joining the Ph.D. program, I worked as a Software Engineer at Microsoft and as a Decision Scientist at Mu Sigma. I received my Bachelor of Engineering degree in Computer Science and Technology from IIEST, Shibpur.
I am interested in the intersection of Natural Language Processing and Human-Computer Interaction, with a focus on developing NLP technologies that complement the capabilities of human experts. My work centers on three key research thrusts:
Human-Centered NLP: How can we identify stakeholder needs for the practical adoption of NLP applications? How can we evaluate whether NLP applications meet those needs? How can research in human-centered NLP help push forward basic NLP research? How can we align NLP models to effectively complement human experts in critical fields? [Preprint] [IPM Journal]
Interpretable NLP Models: How can we build NLP models that help stakeholders understand their inner workings? How can we effectively evaluate interpretable models? How can we use insights from interpretable models to steer generative model outputs? How can we build interpretable models that promote responsible and productive human-AI partnerships? [ACL’22] [IPM Journal]
Responsible Language Technologies: How can we detect and mitigate potential harms caused by language technologies? How can we make these models behave responsibly and not perpetuate societal biases? How can we protect workers who contribute to data collection for AI? [FnTIR Journal] [HCOMP’20] [ASIS&T’19]
Our co-design paper, Human-centered NLP Fact-checking: Co-Designing with Fact-checkers using Matchmaking for AI, received a best paper honorable mention (top 3%) at CSCW 2024. [Arxiv]
I spent Fall 2023 as a research intern with the Cisco Responsible AI research team, working on evaluating interpretable NLP models.
I spent Summer 2023 with the Amazon Alexa Responsible AI team, working on developing interpretable NLP models.
Paper on Human-Centered NLP for Fact-Checking was published in a special issue of the Information Processing & Management (IPM) journal (Impact Factor: 6.222). [Arxiv]
Paper on Explaining Black-box NLP Models with Case-based Reasoning was accepted at ACL 2022. [arxiv] [code]
Paper on Interactive AI for Fact-Checking was accepted at ACM CHIIR 2022. [arxiv]
Full list of publications on Google Scholar. (* = equal contribution)
🏅 Human-centered NLP Fact-checking: Co-Designing with Fact-checkers using Matchmaking for AI
Houjiang Liu*, Anubrata Das*, Alexander Boltz*, Didi Zhou, Daisy Pinaroc, Matthew Lease, Min Kyung Lee
CSCW 2024, Best paper honorable mention (top 3%)
The state of human-centered NLP technology for fact-checking [Arxiv]
Anubrata Das, Houjiang Liu, Venelin Kovatchev, and Matthew Lease.
Information Processing & Management 60, no. 2 (2023): 103219.
True or false? Cognitive load when reading COVID-19 news headlines: an eye-tracking study
Li Shi, Nilavra Bhattacharya, Anubrata Das, and Jacek Gwizdka.
In Proceedings of the 8th ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR), 2023.
ProtoTEx: Explaining Model Decisions with Prototype Tensors [code] | [slides] | [Talk] | [Poster]
Anubrata Das*, Chitrank Gupta*, Venelin Kovatchev, Matthew Lease, and Junyi Jessy Li.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
The Effects of Interactive AI Design on User Behavior: An Eye-tracking Study of Fact-checking COVID-19 Claims
Li Shi, Nilavra Bhattacharya, Anubrata Das, Matthew Lease, and Jacek Gwizdka.
In Proceedings of the 7th ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR), 2022.
Fairness in Information Access Systems
Michael D. Ekstrand, Anubrata Das, Robin Burke, Fernando Diaz
Foundations and Trends in Information Retrieval, 2022
Fast, Accurate, and Healthier: Interactive Blurring Helps Moderators Reduce Exposure to Harmful Content
Anubrata Das, Brandon Dang, and Matthew Lease
AAAI Conference on Human Computation and Crowdsourcing (HCOMP), 2020
Dataset bias: A case study for visual question answering
Anubrata Das, Samreen Anjum and Danna Gurari
Proceedings of the Association for Information Science and Technology 56, no. 1 (2019): 58-67.
CobWeb: A Research Prototype for Exploring User Bias in Political Fact-Checking
Anubrata Das, Kunjan Mehta, and Matthew Lease
FACTS-IR Workshop, SIGIR 2019. [slides]
A Conceptual Framework for Evaluating Fairness in Search
Anubrata Das and Matthew Lease
arXiv preprint arXiv:1907.09328 (2019)
Interactive information crowdsourcing for disaster management using SMS and Twitter: A research prototype
Anubrata Das, Neeratyoy Mallik, Somprakash Bandyopadhyay, Sipra Das Bit, and Jayanta Basak
IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops), 2016
Predicting trends in the Twitter social network: A machine learning approach
Anubrata Das, Moumita Roy, Soumi Dutta, Saptarshi Ghosh, and Asit Kumar Das
In International Conference on Swarm, Evolutionary, and Memetic Computing, Springer, Cham, 2014.