Talent

Announcing Our LinkedIn-Cornell 2024 Grant Recipients

From our inaugural year to this year-three cohort, LinkedIn has progressively aligned its grant initiatives with academic and industry challenges, reflecting a deepening collaboration with academic research. Focused on foundational research, our partnership with the Cornell Ann S. Bowers College of Computing and Information Science (Cornell Bowers CIS) addresses critical, real-world issues.

This year, four faculty members and four doctoral students from Cornell Bowers CIS are the latest recipients of annual grants awarded through the college’s five-year partnership with LinkedIn. These grant recipients are tackling diverse and pressing topics: developing models to predict and optimize long-term user behavior on social networks, ensuring fairness in predictive algorithms, leveraging large language models for actionable insights in knowledge graphs, and creating frameworks for privacy-enhancing technologies.

Additionally, doctoral student projects are exploring the impact of "multi-sided fairness" in recommendation algorithms, ways to enhance user trust through personalized language in large language models, improvements to multi-GPU cluster efficiency for generative models, and adaptive, safe reinforcement learning algorithms for healthcare and conversational agents.

These initiatives reflect a significant shift towards addressing complex technological and societal challenges, embodying LinkedIn's commitment to advancing technology with a focus on fairness, inclusion, and long-term positive impacts. Our eight recipients are:

Faculty award winners

Grant recipient Sarah Dean

Sarah Dean, assistant professor of computer science, believes the algorithms that power social network platforms are too short-sighted. The models anticipate short-term engagement, like clicks, but fail to capture longer-term impacts, like a user’s growing distaste for clickbait headlines or educational content that no longer serves their skill set. In “User Behavior Models for Anticipating and Optimizing Long-term Impacts,” Dean seeks to develop models that can anticipate long-term user dynamics and algorithms that can optimize long-term impacts.

Grant recipient Michael P. Kim

Michael P. Kim, assistant professor of computer science, will explore fairness in algorithmic predictive models in his project, “Prediction as Intervention: Promoting Fairness when Predictions have Consequences.” Today's predictive algorithms can influence the outcomes they are meant to predict. For instance, algorithms may help job seekers connect with relevant companies, making it more likely that they will be hired. Kim's project aims to understand the potential for such algorithms to cause harm by overlooking individuals from marginalized groups, but also to promote new opportunities through deliberate predictions.

Grant recipient Jennifer J. Sun

Jennifer J. Sun, assistant professor of computer science, aims to leverage large language models (LLMs) to process text data from knowledge graphs. The goal of her project, “Learning and Reasoning Reliably from Unstructured Text,” is to use LLMs to develop a system that synthesizes the text data into actionable insights. Sun aims to develop algorithms that could scale to industry-level applications, such as skills matching and career recommendations.

Grant recipient Daniel Susser

Daniel Susser will explore misalignments between the ways different actors conceptualize and reason about privacy-enhancing technologies (PETs) – statistical and computational tools designed to help data collectors process and learn from personal information while simultaneously protecting individual privacy. In “Navigating Ethics and Policy Uncertainty with Privacy-Enhancing Technologies,” Susser will develop shared frameworks for data subjects, researchers, companies, and regulators to better reason, deliberate, and communicate about the use of PETs in real-world contexts.

Doctoral student award winners

Grant recipient Sophie Greenwood

Sophie Greenwood, a doctoral student in the field of computer science advised by Nikhil Garg and Jon Kleinberg, will test the effectiveness of recommendation algorithms that aim to achieve “multi-sided fairness” by balancing fairness and performance for both users and items. In her project, “Effects of User and Item Fairness Constraints on Markets and Individuals,” Greenwood will investigate questions like: Does multi-sided fairness improve long-term platform engagement on both sides of the market?

Grant recipient Kowe Kadoma

Kowe Kadoma, a doctoral student in the field of information science advised by Mor Naaman, studies how feelings of inclusion and agency impact user trust in artificial intelligence. In her project, “The Effects of Personalized LLMs on Users’ Trust,” Kadoma will expand on existing research that finds LLMs often produce language with limited variety, which may frustrate or alienate users. The goal is to improve LLMs so that they produce more personalized language that matches users’ language style.

Grant recipient Abhishek Vijaya Kumar

Abhishek Vijaya Kumar, a doctoral student in the field of computer science advised by Rachee Singh, will develop systems and algorithms to efficiently share the memory and compute resources on multi-GPU clusters. The goal of the project, called “Responsive Offloaded Tensors for Faster Generative Inference,” is to improve the performance of memory- and compute-bound generative models.

Grant recipient Kaiwen Wang

Kaiwen Wang, a doctoral student in the field of computer science advised by Wen Sun and Nathan Kallus, will develop reinforcement learning (RL) algorithms that are adaptive, safe, more efficient, and steerable at run time. In “Steerable Interactive Agents via Efficient and Safe RL,” Wang will apply these algorithms to important applications such as healthcare and conversational agents.

Congratulations to this year’s grant recipients. We’re excited for another year of collaboration with the bright minds at Cornell Bowers CIS and can’t wait to see the results of these important research initiatives.