Improving On-Campus Digital Mental Health Support for Underrepresented University Students

Lucretia Williams asks how the mental health of Black and Latinx students might be improved through an online environment that encourages community, communication, and connection with other Black and Latinx students, across a historically Black university and a large state university.

She conducts a survey study that gathers perspectives on how Black and Latinx students navigate their on-campus counseling services, and analyzes the counseling websites of 60 universities.

Decision Parity with Asymmetric Loss on Loan Inquiries

Hao-Che Hsu uses proprietary alternative credit data from Experian to investigate classification algorithms in settings with asymmetric loss.

Drawing on the loan-type information in the inquiry data and the observed financial behavior in the corresponding tradeline data, he trains a deep neural network classifier with asymmetric losses to assess applicant risk, find the optimal binary decision, and examine issues of algorithmic fairness in credit ratings. He also aims to construct an optimal decision rule on loan inquiries.
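The asymmetric-loss idea can be made concrete with a small, self-contained sketch. This is not Hsu's actual model; it only illustrates the standard Bayes-optimal threshold for a binary decision when the two error types (approving a defaulter vs. rejecting a good applicant) carry different costs. The cost values and function names are illustrative assumptions.

```python
# Toy illustration: with asymmetric misclassification costs, the
# Bayes-optimal binary rule approves a loan when the predicted
# default probability falls below a cost-dependent threshold.

def optimal_threshold(cost_fp: float, cost_fn: float) -> float:
    """Threshold on P(default) that minimizes expected loss.

    cost_fp: cost of approving an applicant who later defaults
    cost_fn: cost of rejecting an applicant who would have repaid
    Approve when P(default) * cost_fp < (1 - P(default)) * cost_fn,
    i.e. when P(default) < cost_fn / (cost_fp + cost_fn).
    """
    return cost_fn / (cost_fp + cost_fn)

def decide(p_default: float, cost_fp: float, cost_fn: float) -> str:
    """Apply the threshold rule to one applicant's predicted risk."""
    return "approve" if p_default < optimal_threshold(cost_fp, cost_fn) else "reject"

# Equal costs recover the familiar 0.5 cutoff; costlier defaults
# push the approval threshold down, making the rule more conservative.
print(optimal_threshold(1.0, 1.0))  # 0.5
print(optimal_threshold(9.0, 1.0))  # 0.1
print(decide(0.2, 9.0, 1.0))        # reject
```

In practice the probability fed into such a rule would come from the trained classifier, and fairness analysis would ask how the induced decisions differ across groups.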

Judging (Art)ificial Intelligence: How Do Humans Perceive Creative AI?

Alex Bower asks whether the capacity for creative expression is exclusively human: can AI truly create art?

He investigates perceptions of AI creativity relative to human creativity and tries to identify potential algorithm aversion or appreciation in subjective, creative domains. He prepared 60 human-created and 20 AI-created (generated with GPT-2) jokes, then had experiment participants rate the jokes and guess their source. He also discusses the ethical, legal, and social ramifications of creative AI.

Automated Techniques for Evaluating and Improving the Accessibility of Software

Navid Salehnamadi studies software accessibility and aims to provide a tool that is not only use-case and assistive-service driven but also fast. His automated tool, Latte, reuses existing GUI tests and reflects the way users with disabilities interact with apps by driving assistive services.

He implemented a prototype of Latte focused on users with blindness or motor disabilities (via TalkBack and SwitchAccess) and evaluated it on 20 Android apps. He found 39 TalkBack failures across 19 apps and 11 SwitchAccess failures.

Digital Inequality

Nneka Udeagbala investigates the use of the term "digital inequality" in the computing literature in order to understand how it has expanded from the earlier notion of the "digital divide."

Automating Detection of Medical Misinformation on Social Media

Robert Logan aims to design models that automatically identify medical misinformation spreading on social media; the models should support rapid addition and modification of misconceptions, be robust to novel language, and work in a few-shot setting.

He formulates misinformation detection as a combination of misconception retrieval and stance-detection classification, and benchmarks the zero-shot performance of NLP models.
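The two-stage formulation can be sketched with a deliberately minimal toy pipeline. The bag-of-words retrieval and negation-based stance rule below stand in for the neural NLP models the work actually benchmarks, and the misconception catalog entries are hypothetical examples.

```python
# Toy sketch of the two-stage formulation: (1) retrieve the catalog
# misconception closest to a post, (2) classify the post's stance
# toward it. Real systems would use learned retrievers/classifiers.
import math
from collections import Counter

MISCONCEPTIONS = [  # hypothetical catalog; easy to extend with new entries
    "vitamin c cures the common cold",
    "vaccines cause autism",
]

def bow(text: str) -> Counter:
    """Bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(post: str) -> str:
    """Stage 1: find the most similar known misconception."""
    return max(MISCONCEPTIONS, key=lambda m: cosine(bow(post), bow(m)))

def stance(post: str) -> str:
    """Stage 2 (toy rule): negation words suggest the post refutes the claim."""
    negations = {"not", "no", "never", "don't", "doesn't"}
    return "refutes" if bow(post).keys() & negations else "supports"

post = "vaccines cause autism in children"
print(retrieve(post), "|", stance(post))
```

Splitting detection into retrieval plus stance is what makes rapid catalog updates possible: adding a new misconception only means adding an entry, not retraining a classifier.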

Computational Archival Science and Human Rights

Bono Olgado focuses on the development and application of computing technologies to large-scale document processing and archival functions, and on improving their efficiency and predictive power.

He also studies how these technologies are designed and deployed, and how they shape the creation and management of human rights records.

Understanding Human-AI Teams

Aakriti Kumar tries to understand how humans work with AI teammates and how to build better human-AI teams. The AI teammates can be algorithms that assist humans. Focusing on the human member of the team, she builds cognitive models of the processes that drive cooperative interaction.

Her research explores well-documented biases such as algorithm aversion and offers guidance on interface design. The study also explores metacognition: how do humans decide when to ask for the AI's help, and how do humans infer the AI's ability?
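One simple way to think about those two metacognitive questions is as an expected-value comparison plus an incremental belief update. The sketch below is a hypothetical toy model, not Kumar's actual cognitive model; the cost and learning-rate parameters are illustrative assumptions.

```python
# Hypothetical toy model of metacognition in human-AI teaming:
# (1) delegate when the AI's estimated accuracy, net of the cost of
#     asking, beats one's own confidence;
# (2) infer the AI's ability by nudging an estimate after each
#     observed success or failure (a simple delta rule).

def should_ask_ai(own_confidence: float,
                  estimated_ai_accuracy: float,
                  asking_cost: float = 0.05) -> bool:
    """Return True when delegating to the AI has higher expected value."""
    return estimated_ai_accuracy - asking_cost > own_confidence

def update_ai_estimate(prior: float, ai_was_correct: bool,
                       learning_rate: float = 0.1) -> float:
    """Move the ability estimate toward the latest observed outcome."""
    target = 1.0 if ai_was_correct else 0.0
    return prior + learning_rate * (target - prior)

# Start uncertain (0.5) and observe four AI answers.
est = 0.5
for outcome in [True, True, False, True]:
    est = update_ai_estimate(est, outcome)
print(round(est, 3), should_ask_ai(0.6, est))
```

Even this caricature reproduces a qualitative pattern from the aversion literature: a single observed AI failure lowers the ability estimate and can tip the decision back toward relying on oneself.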

Deepfakes: Ethics, Reasoning, and Memory

Nika Nour studies how people reason their way through fake content and false information, specifically deepfake videos. These videos are edited using a combination of AI algorithms to replace a person or the audio in an original video so that it appears authentic.