Room 3160, Torgersen Hall
620 Drillfield Drive
Blacksburg, VA 24061
I am a PhD Candidate in the Dept. of Computer Science @Virginia Tech. I started here in the fall of 2017, specializing in Human-Computer Interaction. I am co-advised by Tanushree Mitra@UW and Sang Won Lee. My research lies at the intersection of computer science and communication, especially in designing interventions to help users navigate online information spaces. I am also working with Mike Horning@CommSchool on several projects. I apply both qualitative and quantitative methods in my research. See the projects and publications for more detail. My research has been published at CSCW and CHI. Before this, I received a bachelor’s degree in computer science from BUET, followed by work as a software engineer @Reve Systems.
Check out my resume here.
|Feb 6, 2022||I will present our paper at CHI 2022 this May|
|Aug 6, 2021||I will present our papers at CSCW 2021 this October|
|May 6, 2021||I will be joining as a summer research intern @Hacks/Hackers|
|Oct 19, 2020||I will be presenting our paper at CSCW 2020|
Abstract: To promote engagement, recommendation algorithms on platforms like YouTube increasingly personalize users’ feeds, limiting users’ exposure to diverse content and depriving them of opportunities to reflect on their interests compared to others’. In this work, we investigate how exchanging recommendations with strangers can help users discover new content and reflect. We tested this idea by developing OtherTube—a browser extension for YouTube that displays strangers’ personalized YouTube recommendations. OtherTube allows users to (i) create an anonymized profile for social comparison, (ii) share their recommended videos with others, and (iii) browse strangers’ YouTube recommendations. We conducted a 10-day-long user study (n=41) followed by a post-study interview (n=11). Our results reveal that users discovered and developed new interests from seeing OtherTube recommendations. We identified user and content characteristics that affect interaction and engagement with exchanged recommendations; for example, younger users interacted more with OtherTube, while the perceived irrelevance of some content discouraged users from watching certain videos. Users reflected on their interests as well as others’, recognizing similarities and differences. Our work shows promise for designs leveraging the exchange of personalized recommendations with strangers.
Paper link: https://arxiv.org/abs/2201.11709
Abstract: Struggling to curb misinformation, social media platforms are experimenting with design interventions to enhance consumption of credible news on their platforms. Some of these interventions, such as the use of warning messages, are examples of nudges—a choice-preserving technique to steer behavior. Despite their application, we do not know whether nudges could steer people into making conscious news credibility judgments online and if they do, under what constraints. To answer, we combine nudge techniques with heuristic-based information processing to design NudgeCred—a browser extension for Twitter. NudgeCred directs users’ attention to two design cues: authority of a source and other users’ collective opinion on a report by activating three design nudges—Reliable, Questionable, and Unreliable, each denoting particular levels of credibility for news tweets. In a controlled experiment, we found that NudgeCred significantly helped users (n=430) distinguish news tweets’ credibility, unrestricted by three behavioral confounds—political ideology, political cynicism, and media skepticism. A five-day field deployment with twelve participants revealed that NudgeCred improved their recognition of news items and attention towards all of our nudges, particularly towards Questionable. Among other considerations, participants proposed that designers should incorporate heuristics that users would trust. Our work informs nudge-based system design approaches for online media.
Paper link: https://dl.acm.org/doi/pdf/10.1145/3479571
Abstract: As news organizations embrace transparency practices on their websites to distinguish themselves from those spreading misinformation, HCI designers have the opportunity to help them effectively utilize the ideals of transparency to build trust. How can we utilize transparency to promote trust in news? We examine this question through a qualitative lens by interviewing journalists and news consumers—the two stakeholders in a news system. We designed a scenario to demonstrate transparency features using two fundamental news attributes that convey the trustworthiness of a news article: source and message. In the interviews, our news consumers expressed the idea that news transparency could be best shown by providing indicators of objectivity in two areas (news selection and framing) and by providing indicators of evidence in four areas (presence of source materials, anonymous sourcing, verification, and corrections upon erroneous reporting). While our journalists agreed with news consumers’ suggestions of using evidence indicators, they also suggested additional transparency indicators in areas such as the news reporting process and personal/organizational conflicts of interest. Prompted by our scenario, participants offered new design considerations for building trustworthy news platforms, such as designing for easy comprehension, presenting appropriate details in news articles (e.g., showing the number and nature of corrections made to an article), and comparing attributes across news organizations to highlight diverging practices. Comparing the responses from our two stakeholder groups reveals conflicting suggestions with trade-offs between them. Our study has implications for HCI designers in building trustworthy news systems.
Paper link: https://dl.acm.org/doi/pdf/10.1145/3479539
Abstract: Misinformation about critical issues such as climate change and vaccine safety is oftentimes amplified on online social and search platforms. The crowdsourcing of content credibility assessment by laypeople has been proposed as one strategy to combat misinformation by attempting to replicate the assessments of experts at scale. In this work, we investigate news credibility assessments by crowds versus experts to understand when and how ratings between them differ. We gather a dataset of over 4,000 credibility assessments taken from 2 crowd groups—journalism students and Upwork workers—as well as 2 expert groups—journalists and scientists—on a varied set of 50 news articles related to climate science, a topic with widespread disconnect between public opinion and expert consensus. Examining the ratings, we find differences in performance due to the makeup of the crowd, such as rater demographics and political leaning, as well as the scope of the tasks that the crowd is assigned to rate, such as the genre of the article and partisanship of the publication. Finally, we find differences between expert assessments due to differing expert criteria that journalism versus science experts use—differences that may contribute to crowd discrepancies, but that also suggest a way to reduce the gap by designing crowd tasks tailored to specific expert criteria. From these findings, we outline future research directions to better design crowd processes that are tailored to specific crowds and types of content.
Paper link: https://dl.acm.org/doi/pdf/10.1145/3415164
About: In this project, we analyzed 18 users’ interactions with a virtual bulletin board containing 100 documents, like the one in the image below. This was a class project with two other partners, where each of us analyzed biases in a different type of user interaction.
My analysis shows some of the tendencies in initial and consecutive interactions with the bulletin board.
|Google Landmark Recognition Challenge||
About: In this class project, I compared four machine learning models — VGG, ResNet-50 (only the last layer trained), Inception-V4, and ResNet-50 (the whole model trained) — on the Google Landmark Recognition Challenge. Result-wise, the fully trained ResNet-50 achieved a rank of 129 on Kaggle. I used a Google Cloud Compute machine with a K80 GPU for this project.
My analysis showed several interesting results. ResNet-50 with only the last layer trained had the lowest average training time; however, the fully trained ResNet-50 outperformed the other models in every other respect.
About: I analyzed three political subreddits — r/chapotraphouse, r/neoliberal, r/the_donald — during February 2020, at the time of the US election primaries. Using the BigQuery Reddit dataset, I collected image URLs and downloaded the images; examples are shown below. I then converted the memes to text with the Google Cloud Vision API and analyzed the text using spaCy.
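A minimal sketch of that pipeline, under assumptions: the OCR step assumes the `google-cloud-vision` client library with credentials configured (so it is not exercised here), and `top_terms` is a toy stand-in for the spaCy analysis, not the project’s actual code.

```python
import re
from collections import Counter

def ocr_meme(client, image_bytes):
    """OCR one meme image with Cloud Vision text detection.
    Assumes an authenticated vision.ImageAnnotatorClient is passed in."""
    from google.cloud import vision
    response = client.text_detection(image=vision.Image(content=image_bytes))
    annotations = response.text_annotations
    # The first annotation holds the full extracted text, if any was found.
    return annotations[0].description if annotations else ""

def top_terms(meme_texts, k=5):
    """Toy stand-in for the spaCy step: lowercase, tokenize, count."""
    counts = Counter()
    for text in meme_texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts.most_common(k)
```

In practice the OCR output would feed into spaCy for tokenization, lemmatization, and entity extraction rather than the simple frequency count shown here.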
|Fact-check for Google Home||
About: Two implementations of the server (App Engine Node.js and a Google Cloud Function for Firebase). The server uses Google’s Fact Check (ClaimReview) API to find relevant claims and responds to Google Home intents.
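A sketch of the server’s core logic, shown in Python for consistency with the other examples (the actual implementations are in Node.js). It queries the public Fact Check Tools `claims:search` endpoint; the response fields match that API, while `format_reply` is a hypothetical helper for composing the spoken answer.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

FACT_CHECK_ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def build_search_url(query, api_key):
    """Build the claims:search request URL for the user's spoken query."""
    return FACT_CHECK_ENDPOINT + "?" + urlencode({"query": query, "key": api_key})

def search_claims(query, api_key):
    """Fetch matching fact-checked claims (network call; needs a valid API key)."""
    with urlopen(build_search_url(query, api_key)) as resp:
        return json.load(resp).get("claims", [])

def format_reply(claims):
    """Turn the first matching claim into a spoken reply for the Assistant."""
    if not claims:
        return "I could not find a fact-check for that claim."
    claim = claims[0]
    review = claim.get("claimReview", [{}])[0]
    return "{} was rated {} by {}.".format(
        claim.get("text", "That claim"),
        review.get("textualRating", "unknown"),
        review.get("publisher", {}).get("name", "a fact-checker"),
    )
```

The intent handler would extract the claim text from the Google Home request, call `search_claims`, and speak the formatted reply back to the user.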
|Tweet Sharing Behavior||
About: In this class project, we performed several text analyses on this dataset of around 40 million tweets sharing news from mainstream and non-mainstream news sources. Our analytical tools included LIWC, SAGE, readability metrics, and topic analysis. See the repo for more detail.
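As one concrete example of the readability side of this toolkit, here is a small Flesch reading-ease sketch. It is an illustration, not the project’s actual code, and the syllable counter is a rough vowel-group heuristic.

```python
import re

def count_syllables(word):
    """Rough heuristic: count vowel groups, ignoring a trailing silent 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    if word.endswith("e") and len(groups) > 1:
        return len(groups) - 1
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Flesch reading ease: higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))
```

Applied per tweet, a score like this lets us compare the linguistic complexity of news shared from mainstream versus non-mainstream sources.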