Invited Speakers

Jevin West

University of Washington (US) — Information Science

Linda Steg (SAGE keynote)

University of Groningen (NL) — Psychology

Sharad Goel (DIREC keynote)

Harvard Kennedy School (US) — Computer Science

M.J. Crockett

Princeton University (US) — Psychology

Lisa Anne Hendricks

DeepMind (UK) — Computer Science

Stefan Gössling

Linnaeus University and Lund University (SE) — Cultural Geography

Joanna Bryson

The Hertie School of Governance (DE) — Psychology & AI

Tim Althoff

University of Washington (US) — Computer Science

Lauren Brent

University of Exeter (UK) — Behavioral Ecology

Esteban Moro (EPJ keynote)

Universidad Carlos III de Madrid (ES) & MIT (US) — Complexity Science

Keynotes
Jevin West [↑]
Misinformation in and about science

Chair: Claudia Wagner

July 18th, 9:00


Abstract
Science is the greatest of human inventions. Through its norms and procedures, we have improved human health and longevity; we discovered the germ theory of disease, electromagnetism, and plate tectonics. But what about the health of science? Internally, we see challenges, such as the reproducibility crisis, a rise in predatory publishing and pseudoscience, and now integrity threats from generative AI. Externally, we see attacks on science through disinformation campaigns, agnotogenesis, and political point-scoring. Turning the proverbial microscope on science, I will examine some of the potential mechanisms spreading misinformation, including the role of perceived expertise and perceived consensus in governing online discussions about science- and health-related topics. I will also re-examine the reproducibility crisis in light of varying effects, with potential impact on our own field of computational social science. Based on this work, I will end with intervention strategies for mitigating the effects of misinformation in and about science.
Bio
Jevin West is an Associate Professor in the Information School at the University of Washington. He is the co-founder and inaugural director of the Center for an Informed Public at UW, aimed at resisting strategic misinformation, promoting an informed society, and strengthening democratic discourse. He is also the co-founder of the DataLab at UW, a Data Science Fellow at the eScience Institute, and Affiliate Faculty for the Center for Statistics & Social Sciences. His research and teaching focus on the impact of data and technology on science, with an emphasis on slowing the spread of misinformation. He is the co-author of the book "Calling Bullshit: The Art of Skepticism in a Data-Driven World," which helps non-experts question numbers, data, and statistics without an advanced degree in data science.

Linda Steg (SAGE keynote) [↑]
Encouraging pro-environmental actions

Chair: Duncan Watts

July 18th, 16:30


Abstract
Human behaviour causes environmental problems, which can be reduced if people act more sustainably. I will discuss which factors encourage pro-environmental behaviour. Moreover, I will explain why people may not always act pro-environmentally, and how consistent pro-environmental actions can be promoted.
Bio
Linda Steg is professor of environmental psychology at the University of Groningen. She studies factors influencing sustainable behaviour, the effects and acceptability of strategies aimed at promoting sustainable behaviour, and public perceptions of technology and system changes. She is a member of the Royal Netherlands Academy of Arts and Sciences (KNAW) and the European Academy of Sciences and Arts. She has been appointed Knight of the Order of the Netherlands Lion and is a laureate of the Stevin Prize of the Dutch Research Council. She was a lead author of the IPCC special report on 1.5°C and of AR6, and participates in various interdisciplinary and international research programmes in which she collaborates with practitioners working in industry, governments, and NGOs.

Sharad Goel (DIREC keynote) [↑]
Everything but the Kitchen Sink

Chair: Duncan Watts

July 18th, 17:15


Abstract
When estimating the risk of an adverse outcome, common statistical guidance is to include all available factors to maximize predictive performance. But I’ll argue that this popular "kitchen-sink" approach can in fact worsen predictions when the target of prediction differs from the true outcome of interest (e.g., with criminal risk assessments, predicting future arrests as a proxy for future crime). And even when predictions do improve, the benefits are often not as large as generally believed. I'll connect these results to current debates in criminal justice and healthcare.
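To make the proxy-label point concrete, the toy simulation below is a hedged sketch, not the speaker's analysis: the data-generating process, feature names, and effect sizes are illustrative assumptions. It trains a risk model on a proxy label (arrest) and evaluates it against the true outcome of interest (reoffending); adding a feature that drives the proxy but not the outcome lowers the model's AUC on the outcome.

```python
# Minimal simulation sketch (illustrative assumptions, not the speaker's analysis):
# the model is trained to predict a proxy label (arrest) but evaluated against
# the true outcome of interest (reoffending). Adding a feature that drives the
# proxy but not the outcome can hurt performance on the outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 50_000

risk = rng.normal(size=n)                      # latent criminal risk
enforcement = rng.normal(size=n)               # local enforcement intensity (hypothetical)
x_risk = risk + rng.normal(scale=0.5, size=n)  # noisy observed risk factor

true_outcome = rng.random(n) < 1 / (1 + np.exp(-risk))                        # reoffending
proxy_label = rng.random(n) < 1 / (1 + np.exp(-(risk + 1.5 * enforcement)))   # arrest

train, test = np.arange(n) < n // 2, np.arange(n) >= n // 2

for name, feats in [("risk factor only", np.c_[x_risk]),
                    ("kitchen sink (+ enforcement)", np.c_[x_risk, enforcement])]:
    model = LogisticRegression().fit(feats[train], proxy_label[train])
    scores = model.predict_proba(feats[test])[:, 1]
    print(f"{name:30s} AUC vs. true outcome: "
          f"{roc_auc_score(true_outcome[test], scores):.3f}")
```

The point is not the particular numbers but the mechanism: a kitchen-sink model fit to a mismeasured target happily exploits features that explain only the measurement process.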
Bio
Sharad Goel is a Professor of Public Policy at Harvard Kennedy School. He looks at public policy through the lens of computer science, bringing a computational perspective to a diverse range of contemporary social and political issues, including criminal justice reform, democratic governance, and the equitable design of algorithms. Prior to joining Harvard, Sharad was on the faculty at Stanford University, with appointments in management science & engineering, computer science, sociology, and the law school. He holds an undergraduate degree in mathematics from the University of Chicago, as well as a master’s degree in computer science and a doctorate in applied mathematics from Cornell University.

Molly J Crockett [↑]
Algorithm-Mediated Social Learning and the Cultural Evolution of Morality

Chair: Taha Yasseri

July 19th, 9:00


Abstract
Humans are prodigious social learners, and social learning is the engine of cumulative culture. In online social networks, algorithms mediate how we learn from others, with implications for cultural evolution in the digital age. In this talk, I will explore how social learning unfolds in online social networks, with a focus on learning and sharing moral expressions. I'll present empirical work suggesting that algorithmic amplification exploits social learning biases that evolved to promote cooperation, resulting in distorted learning about collective moral beliefs. Finally, I’ll speculate on how algorithm-mediated social learning shapes the cultural evolution of morality.
Bio
Dr. Molly Crockett is an Associate Professor of Psychology at Princeton University. Crockett’s lab investigates moral cognition: how people decide whether to help or harm, punish or forgive, trust or condemn. Their research integrates theory and methods from psychology, neuroscience, economics, philosophy, and data science. Crockett’s recent work has explored moral outrage in the digital age and trust in leaders during the Covid-19 pandemic.

Lisa Anne Hendricks [↑]
Measuring and Mitigating Harm in AI Generated Language

Chair: Roberta Sinatra

July 19th, 16:30


Abstract
Recent language models like ChatGPT and Bard are capable of generating engaging and interesting human-like text. However, as these models continue to improve, researchers, policy makers, and the public have raised serious concerns about their ability to generate harmful language, including language that is toxic or socially biased. In this talk, I will discuss our work at DeepMind to anticipate ethical risks in language models, then discuss how we measure toxicity and bias in our models. In particular, I'll focus on challenges operationalizing anticipated risks into reliable metrics.
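As one hedged illustration of what operationalizing such a risk into a metric can look like (this is not DeepMind's pipeline; `generate`, `toxicity_score`, and the prompt groups are hypothetical placeholders), a common pattern in the literature is to sample several continuations per prompt, record the worst-case toxicity per prompt, and compare the resulting statistic across demographic prompt groups:

```python
# Sketch of one way to operationalize toxicity/bias as a metric (illustrative only;
# `generate`, `toxicity_score`, and the prompt sets are hypothetical placeholders).
from statistics import mean
from typing import Callable, Dict, List

def expected_max_toxicity(prompts: List[str],
                          generate: Callable[[str, int], List[str]],
                          toxicity_score: Callable[[str], float],
                          samples_per_prompt: int = 25) -> float:
    """For each prompt, sample several continuations and keep the worst (max)
    toxicity; average that worst case over prompts."""
    worst_cases = []
    for prompt in prompts:
        continuations = generate(prompt, samples_per_prompt)
        worst_cases.append(max(toxicity_score(c) for c in continuations))
    return mean(worst_cases)

def group_gap(prompts_by_group: Dict[str, List[str]],
              generate: Callable[[str, int], List[str]],
              toxicity_score: Callable[[str], float]) -> Dict[str, float]:
    """A simple bias read-out: compare the metric across demographic prompt groups."""
    return {group: expected_max_toxicity(p, generate, toxicity_score)
            for group, p in prompts_by_group.items()}
```

Even this simple metric surfaces the challenges the talk points to: the result depends on the prompt set, the number of samples, and the reliability of the toxicity classifier itself.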
Bio
Lisa Anne Hendricks is a research scientist at DeepMind. Her research interests span the intersection of language and vision and the development of fair and ethical AI. A common theme in both Lisa Anne's multimodal and fairness research is probing whether models understand human language or whether they rely on learned correlations, with the aim of both accurately measuring and mitigating potential failure modes. At DeepMind, Lisa Anne has led the fairness analysis on many of DeepMind's large models, including Gopher, Chinchilla, and Sparrow. She graduated with her PhD from UC Berkeley in 2019, where she focused on image captioning and text-based video retrieval. She completed her undergraduate studies at Rice University in 2013.

Stefan Gössling [↑]
Transport, mobilities, and climate change: Is there an Avenger moment?

Chair: Roberta Sinatra

July 19th, 17:15


Abstract
Transport emissions continue to grow, with very limited evidence of future change: demand seems to grow continuously, outpacing efficiency gains. This raises the question of the social drivers of transport demand, such as social media, travel influencers, digital nomadism, and the aspirational lifestyles of the carbon elites, as underpinning personal identities, social norms, and social capital generation. Given the evidence that technological innovations appear to constantly increase demand for travel and transport, the question arises whether the arrival of AI represents an Avenger moment. Can we trust technology to support the transport transition?
Bio
Stefan Gössling is a professor of tourism at the School of Business and Economics, Linnaeus University. He has worked on tourism, transportation, mobilities, and climate change for more than 25 years.

Joanna Bryson [↑]
Science and Power in the Context of AI Policy

Chair: Piotr Sapiezynski

July 20th, 9:00


Abstract
Suddenly those of us who work in and with AI have moved from the academic margins into a whirlwind of power, money, international security, and hype. Fortunately, computational social sciences can help us understand these new contexts, and the likely consequences of technological (and climatic) transformations. In this talk I review primarily simulation results explaining that cooperation is natural, that despotism is too (in some contexts), that trust depends on the absence of information, and that polarisation depends on economic precarity. I then pivot to data science supporting the polarisation model, and then to work showing the power dynamics of at least some of the transnational AI regulatory games we are observing.
Bio
Joanna J Bryson is an academic recognised for broad expertise on intelligence, its nature, and its consequences. Holding two degrees each in psychology and AI (BA Chicago, MSc & MPhil Edinburgh, PhD MIT), Bryson has been Professor of Ethics and Technology at the Hertie School in Berlin since 2020, where she is a founding member of the Centre for Digital Governance. From 2002 to 2019 she was on the Computer Science faculty at the University of Bath; she has also been affiliated with Harvard Psychology, Oxford Anthropology, the Mannheim Centre for Social Science Research, the Konrad Lorenz Institute for Evolution and Cognition Research, and the Princeton Center for Information Technology Policy. Bryson advises governments, corporations, and NGOs globally, particularly on AI policy. Her research has appeared in venues ranging from reddit to Science. It presently focuses on the impacts of technology on human societies and cooperation, and on improving governance of AI and digital technology.

Tim Althoff [↑]
How Human-AI Collaboration Can Improve Mental Health

Chair: Piotr Sapiezynski

July 20th, 9:45


Abstract
Access to mental health care falls short of meeting the significant need. More than one billion individuals are affected by mental health conditions, with the majority not receiving the necessary treatment. I will describe how human-AI collaboration, critically enabled by language models, can improve access to and quality of mental health support. Lacking access to professional care, or to complement it, millions of people turn to online peer support communities. While peer support is potentially highly beneficial and significantly more accessible, peer supporters are rarely trained in providing effective support. To train peer supporters, we developed a human-AI collaboration tool that teaches how to express empathy effectively, leveraging language models and reinforcement learning. A randomized trial with peer supporters from TalkLife, the largest peer support platform globally, demonstrates that actionable, just-in-time AI feedback leads to conversations with higher empathy, outperforming traditional training methods. We further demonstrate how a human-AI collaboration approach can teach individuals how to reframe negative thoughts. Enabled by recent advances in language models, this approach makes the evidence-based intervention of cognitive restructuring widely accessible and easier to engage with. Findings from a randomized field study on the Mental Health America platform with over 50,000 participants demonstrate far superior engagement and completion rates compared to a non-AI-supported intervention, and inform psychological theory regarding which types of reframes lead to a range of positive outcomes. My hope is that, as a community and for the collective benefit of society, we will leverage these insights and advances in technology to develop social and communication technology that considers or helps improve mental health for all.
Bio
Tim is an Assistant Professor at the University of Washington. He directs the Behavioral Data Science Group, which works on research related to mental health, misinformation, scientific reproducibility, and public health, including informing the COVID-19 response. The goal of his research is to better understand and empower people through data. His research develops computational methods that leverage large-scale behavioral data to extract actionable insights about our lives, health, and happiness by combining techniques from data science, social network analysis, and natural language processing.

Lauren Brent [↑]
Friendship in animals: Adventures in network science and how an evolutionary perspective helps us better understand human health and behaviour

Chair: Sune Lehmann

July 20th, 16:30


Abstract
Friendship is crucial for human health and well-being. People who are socially isolated have a greater risk of heart disease than heavy smokers, drinkers, and the obese, and halting social isolation's ongoing rise is a growing priority for public health and political policy. Coming to grips with our need for friends requires that we look not only at how friendship is manifested in contemporary societies but also at its origins in our evolutionary past. Yet the evolutionary origins of friendship, and the degree to which friendship's components reflect human specializations, are unclear. My research explores the social networks of other animals to establish the causes and consequences of friendship in evolutionary time and the extent of its human uniqueness. Here, I will show that friendly relationships not only exist outside of humans but are used by other animals to cope with challenges in their environments, with consequences for individual health and longevity. While my research may differ from that of other computational social scientists in the disciplinary perspective it draws on, we find common ground in the tools we use and the limitations they can impose on our work. The density of a monkey grooming network is as much a point estimate as the density of a human association network – neither provides an indication of certainty. My group has thus developed BISoN, a Bayesian framework for inference of social networks generated from observational data. I will introduce BISoN and show how it generates uncertainty around edges, which can be carried forward into downstream statistical analyses.
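For intuition only, the sketch below shows the general idea of edge-level uncertainty propagation using a simple Beta-Binomial stand-in; it is not the BISoN package's API, and the observation counts, prior, and variable names are made-up assumptions.

```python
# Conceptual sketch of Bayesian edge uncertainty (not the BISoN API; a minimal
# Beta-Binomial stand-in under assumed data). Each dyad's edge weight is the
# probability of being observed together; the posterior is sampled rather than
# point-estimated, so downstream statistics (here, density) come with uncertainty.
import numpy as np

# Hypothetical observation data: per dyad, joint observations out of sampling
# periods in which both individuals were observed.
together = np.array([8, 2, 0, 5, 1])
opportunities = np.array([10, 10, 10, 10, 10])

alpha0, beta0 = 1.0, 1.0            # uniform Beta prior on edge weights
rng = np.random.default_rng(1)
n_draws = 4000

# Posterior draws of every edge weight: Beta(alpha0 + together, beta0 + misses)
edge_draws = rng.beta(alpha0 + together,
                      beta0 + (opportunities - together),
                      size=(n_draws, len(together)))

# Carry the uncertainty forward into a network-level statistic (mean edge weight).
density_draws = edge_draws.mean(axis=1)
lo, hi = np.percentile(density_draws, [2.5, 97.5])
print(f"posterior density estimate: {density_draws.mean():.2f} "
      f"(95% interval {lo:.2f}-{hi:.2f})")
```

Because the network statistic is computed per posterior draw rather than from point estimates, its uncertainty can be carried into downstream analyses.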
Bio
Dr Lauren Brent is an interdisciplinary life scientist in the School of Psychology at the University of Exeter. Her research strives to understand the evolution of social relationships in humans and other animals, and the consequences our social networks impose on our health and longevity. Her work is currently funded by an ERC Consolidator Grant, FriendOrigins, and by grants from the National Institutes of Health. She is a member of the Women in Network Science (WiNS) Society.

Esteban Moro (EPJ Data Science keynote) [↑]
Understanding urban social resilience through behavioral mobility data

Chair: Sune Lehmann

July 20th, 17:15


Abstract
The economic and social progress of our urban areas, institutions, and labor markets depend on the diversity and resilience of the social fabric in cities. However, several major forces can erode these connections, such as income or racial segregation and differences in education and job access. In this talk, I will present recent research on understanding the fragility of the network of social connections and interactions in cities by analyzing behavioral mobility data and its relationship with networked inequalities, such as experienced segregation, access to healthy food, and adaptation to the recent pandemic. I will also discuss potential data-driven interventions aimed at reinforcing the social fabric in cities and mitigating the detrimental impacts of networked inequalities.
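As a rough, hedged illustration of the kind of measure such analyses build on (the function, input layout, and normalization below are assumptions for the sketch, not necessarily the speaker's definition), one can score each place by how far its visitors' income-quartile mix deviates from an even split:

```python
# Hedged sketch of an experienced-segregation style measure from visit data
# (illustrative; the input format and normalization are assumptions). A place
# visited evenly by all income quartiles scores 0; a place visited by a single
# quartile scores 1.
from collections import Counter
from typing import Dict, List, Tuple

def place_segregation(visits: List[Tuple[str, int]]) -> Dict[str, float]:
    """visits: (place_id, visitor_income_quartile in {0, 1, 2, 3}) pairs."""
    by_place: Dict[str, Counter] = {}
    for place, quartile in visits:
        by_place.setdefault(place, Counter())[quartile] += 1
    scores = {}
    for place, counts in by_place.items():
        total = sum(counts.values())
        shares = [counts.get(q, 0) / total for q in range(4)]
        # deviation of the visitor mix from an even 1/4 split, normalized to [0, 1]
        scores[place] = (2 / 3) * sum(abs(s - 0.25) for s in shares)
    return scores

# Example: a cafe visited by all quartiles vs. a club visited by a single quartile.
print(place_segregation([("cafe", 0), ("cafe", 1), ("cafe", 2), ("cafe", 3),
                         ("club", 3), ("club", 3), ("club", 3), ("club", 3)]))
```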
Bio
Esteban Moro is a researcher and data scientist at MIT Connection Science and a professor at Universidad Carlos III de Madrid (UC3M) in Spain. His work lies at the intersection of big data, network science, and computational social science, with a focus on human dynamics, collective intelligence, social networks, and urban mobility in areas such as viral marketing, natural disaster management, and economic segregation in cities. He has received numerous awards for his research, including the "Shared University Award" from IBM in 2007 for his research on modeling viral marketing in social networks and the "Excellence in Research" Awards in 2013 and 2015 from UC3M. Esteban's work has appeared in major journals such as Nature Communications, Nature Human Behaviour, PNAS, and Science Advances and is regularly covered by media outlets such as The Atlantic, The Washington Post, The Wall Street Journal, and El País (Spain).