Insights and Inspirations from Leading Scholars for Future Computational Social Scientists

Howard University

This blog is part of a 3-year ongoing series “The Future of Computational Social Science is Black” about SICSS-Howard/Mathematica, the first Summer Institute in Computational Social Science held at a Historically Black College or University. To learn more about SICSS-H/M’s inaugural start, read the 2021 blog “Welcome SICSS-Howard/Mathematica 2021” or our first blog “Uncovering new keys to countering anti-Black racism and inequity using computational social science.” If you are interested in applying to participate in SICSS-H/M 2024, check out our website.


SICSS-Howard/Mathematica 2023 participants were joined by three guest speakers during the second week: Desmond U. Patton, Ph.D., Imani N. S. Munyaka, Ph.D., and Deen Freelon, Ph.D. Each speaker shared recent and ongoing projects pertinent to computational social science in a pre-recorded video.

Dr. Desmond U. Patton

Desmond U. Patton, Ph.D., Brian and Randi Schwartz University Professor, Social Policy, University of Pennsylvania & Director, SAFElab: “Community Data Science Approaches for Gun Violence Prevention”

Dr. Desmond U. Patton was the first guest speaker at SICSS-H/M 2023. His research focuses on the relationship between social media and gang violence, specifically how communities constructed online can initiate harmful behavior offline. For example, between 2012 and 2015, his team conducted a quantitative study observing how disrespect of grief can amplify or trigger offline violence. His presentation addressed the broader impact of social media on gun violence prevention and the use of artificial intelligence in this work.

Professor Patton’s team created a methodology called CASM (Contextual Analysis of Social Media), a process to contextually interrogate text and develop domain-specific decisions to label social media data for training an algorithmic system. They also developed VATAS (Visual & Text Analysis System) to allow annotators, including community domain experts and students from outside the community who are training to become master’s-level social workers, to grapple with the context around social media data. One of Professor Patton’s ethical considerations is the digital surveillance and policing aspect of his work, and he proposed that the deployment of CASM and VATAS be accompanied by a robust set of ethical guidelines.
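To make this kind of workflow concrete, here is a minimal, hypothetical sketch of an expert-annotation-to-classifier pipeline in Python. The posts, labels, and model choices below are invented for illustration; this is not SAFElab data or the actual CASM/VATAS implementation.

```python
# Hypothetical sketch of a CASM-style pipeline: context-aware labels from
# domain experts are used to train a text classifier. All posts, labels,
# and categories below are invented; this is not SAFElab's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Annotations as they might come out of a VATAS-like tool, where community
# experts interpret slang and local context before assigning a label.
annotated_posts = [
    ("miss you every day bro", "grief"),
    ("can't believe he's gone, rest easy", "grief"),
    ("keep my name out your mouth or else", "aggression"),
    ("you know where to find me, try it", "aggression"),
    ("new kicks just dropped, feeling good", "other"),
    ("studio session tonight with the crew", "other"),
]
texts, labels = zip(*annotated_posts)

# Train a simple bag-of-words classifier on the expert-labeled examples.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Label an unseen post; in practice any such prediction would be reviewed
# by human annotators, per the ethical guidelines Professor Patton proposes.
print(model.predict(["rest easy big bro, we got you"]))
```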

Professor Patton concluded his talk by describing the dichotomy of social media and artificial intelligence. While social media is a powerful tool for understanding or preventing root causes of violence, and AI offers an opportunity to augment current violence prevention efforts, they are not the sole answer to violence prevention. Furthermore, when not used thoughtfully and correctly, they can become tools for surveillance and mass incarceration. He called for critical and diverse perspectives, voices, and applications of social media data in AI. These reflections inspired Professor Patton to further his work by developing new methods and partnerships, and have informed the building of the Penn Center for Inclusive Innovation and Technology, which will fuse anti-racist frameworks, social work thinking, and rigorous analytical methods to tackle thorny social issues including, but not limited to, gun violence, trauma, and grief.


Dr. Imani N.S. Munyaka

Imani N. S. Munyaka, Ph.D., Assistant Professor, Computer Science and Engineering, University of California, San Diego & founder of Ujima Security and Privacy Research Group: “Computational Social Science for Privacy, Security and Social Equity”

Dr. Imani N.S. Munyaka was the second guest speaker at SICSS-H/M 2023, and is also a SICSS-H/M 2022 alumna. Her research focuses on security, privacy, and human-computer interaction, with the general goal of equitable user experiences and outcomes for marginalized identities. In her presentation, Professor Munyaka pointed out a few areas of interest related to her work, including data security and privacy, misinformation, behavior modeling, and social engineering schemes. One example of a social engineering scheme is the spam call: any call a person receives that has malicious intent, potentially leading to privacy violations and financial loss.

As Professor Munyaka argues, even though multiple solutions are currently implemented, such as spam call detection apps and carrier authentication, people should still care about this problem because it interrupts the important and unique services a phone call can provide. Motivated by this issue, her research investigates how users respond to incoming calls, the decision process that leads them to answer, and the ways spam call apps communicate information to the visually impaired. In her experiment, she discovered that when calls from unknown numbers carry an authentication label, users may be more likely to answer them. Conversely, when calls from known numbers are labeled as spam, users still tend to answer them. Professor Munyaka also tested the accessibility of spam call detection apps and found that many buttons were unlabeled or incorrectly labeled, adding extra steps for people who are visually impaired. She raised future research questions for SICSS participants to consider, such as why people respond the way they do to social engineering schemes, and why developers and organizations fail to address the needs of marginalized and minoritized groups.
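As a concrete illustration of this style of finding, the sketch below cross-tabulates answer rates by caller familiarity and the label displayed on the incoming-call screen. The counts are invented for illustration and are not Professor Munyaka's data.

```python
# Hypothetical cross-tab of answer rates by caller familiarity and the
# label displayed on the incoming-call screen. All counts are invented.
import pandas as pd

calls = pd.DataFrame({
    "caller":   ["unknown", "unknown", "known", "known"],
    "label":    ["authenticated", "no label", "spam", "no label"],
    "answered": [61, 28, 74, 90],   # calls answered (invented counts)
    "total":    [100, 100, 100, 100],
})
calls["answer_rate"] = calls["answered"] / calls["total"]

# Pattern mirroring the talk: authentication raises answer rates for
# unknown numbers, while known numbers are answered even when labeled spam.
print(calls[["caller", "label", "answer_rate"]])
```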


Dr. Deen Freelon

Deen Freelon, Ph.D., Associate Professor, Hussman School of Journalism and Media, University of North Carolina at Chapel Hill & Principal Researcher, Center for Information, Technology, and Public Life: “Analyzing Social Media from a User-Eye View with PIEGraph”

Dr. Deen Freelon was the third guest speaker at SICSS-H/M 2023. His theoretical interests address how ordinary citizens use social media and other digital communication technologies for political purposes. He presented his study on digital politics and a user-centric analytical tool his team developed called the Personalized Information Environment Graph (PIEGraph). This tool consists of a social media app to collect user-eye view data, a database of over 3,000 web domains pre-tagged for political bias and factual content, and an opt-in survey to collect basic demographic, political, and media opinion data.
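To illustrate how such a tool's pieces might fit together, here is a minimal pandas sketch that joins per-user exposure logs to a domain-level table of bias and fact-quality tags. The domains, scores, and schema are invented placeholders, not PIEGraph's actual database.

```python
# Hypothetical sketch: join what each user was exposed to against a
# pre-tagged domain table to summarize their information environment.
# Domains, scores, and schema are invented, not PIEGraph's actual data.
import pandas as pd

# One row per domain a user's feed exposed them to (clicked or not).
exposures = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2],
    "domain":  ["left-news.example", "center-news.example",
                "left-news.example", "right-news.example",
                "center-news.example"],
})

# Pre-tagged ratings: bias from -1 (left) to +1 (right), fact quality 0-1.
domain_tags = pd.DataFrame({
    "domain":       ["left-news.example", "center-news.example",
                     "right-news.example"],
    "bias":         [-0.8, 0.0, 0.9],
    "fact_quality": [0.6, 0.9, 0.4],
})

# Each user's personal information environment, summarized as mean scores.
per_user = (exposures.merge(domain_tags, on="domain")
                     .groupby("user_id")[["bias", "fact_quality"]]
                     .mean())
print(per_user)
```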

One of the resources integrated into this system is Media Bias/Fact Check, which rates the political bias and fact quality of media sources and has comprehensive coverage of many widely used English-language news and political sites. Employing these ratings, Professor Freelon was able to develop a more nuanced picture of the level of fact quality in any individual’s personal information environment. He recruited 1,000 demographically matched US users from a panel company and compensated them for granting PIEGraph permission to collect their data for at least a year. He observed a quadratic relationship: people with mean ideological bias scores at either extreme (left or right) tend to have low mean fact quality scores. However, it must be considered that Media Bias/Fact Check has a tendency to score sites on the ideological extremes lower in fact quality. Engagement with low fact quality sites may also be predicted differently depending on the specific conspiracy theory. In general, people are affected by much more than what they click on, and this effect is amplified over time by illusory truth effects and confirmation bias.
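A hedged sketch of how one might test for that quadratic relationship: regress per-user mean fact quality on mean bias and its square. The data below are simulated to mimic the reported pattern; they are not Professor Freelon's data.

```python
# Simulate per-user scores that mimic the reported inverted-U pattern,
# then fit fact_quality ~ a*bias^2 + b*bias + c. All data are invented.
import numpy as np

rng = np.random.default_rng(42)
bias = rng.uniform(-1, 1, 500)                         # mean ideological bias
fact = 0.9 - 0.5 * bias**2 + rng.normal(0, 0.05, 500)  # extremes score lower

a, b, c = np.polyfit(bias, fact, deg=2)

# A negative quadratic coefficient indicates lower mean fact quality at
# both ideological extremes, consistent with the relationship described.
print(f"quadratic coefficient a = {a:.2f}")
```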

Professor Freelon concluded by discussing the advantages and disadvantages of PIEGraph relative to similar approaches. For example, PIEGraph allows investigators to see what users see regardless of whether they click on it, and its pre-tagged database is highly extensible. However, it relies on the Twitter API and could be vulnerable to changes in corporate data policy. In the future, the pre-tagged database could be extended to cover more types of use cases, providing information for diverse research topics.

This year’s guest speaker presentations at SICSS-Howard/Mathematica were fruitful. We thank Dr. Patton, Dr. Munyaka, and Dr. Freelon for taking the time to share their work with the next generation of computational social scientists.

For more information about SICSS-Howard/Mathematica, check out our website, follow us on Twitter, like us on Facebook, and join our email list. The application for SICSS-Howard/Mathematica 2024 is open! Apply now!

About the authors

Naniette Coleman

Naniette H. Coleman is a PhD candidate in the Sociology Department at the University of California, Berkeley and the founder of SICSS-Howard/Mathematica. Her work sits at the intersection of the sociology of culture and organizations and focuses on cybersecurity, surveillance, and privacy in the US context. Specifically, Naniette’s research examines how organizations assess risk, make decisions, and respond to data breaches, as well as organizational compliance with state, federal, and international privacy laws. Naniette holds a Master of Public Administration with a specialization in Democracy, Politics, and Institutions from the Harvard Kennedy School of Government, and both an M.A. in Economics and a B.A. in Communication from the University at Buffalo, SUNY. A non-traditional student, Naniette has prior professional experience that includes local, state, and federal service, as well as work for two international organizations and two universities. Naniette is also passionate about the arts.


Amber Du

Nianyao (Amber) Du received her Bachelor of Arts in Applied Mathematics and Statistics from the University of California, Berkeley. She is currently a second-year master’s student studying Statistics at the National University of Singapore. Amber served as a research assistant, project lead, and co-lab manager in the AAC&U award-winning, Berkeley-based Interdisciplinary Research Group on Privacy under Ph.D. Candidate Naniette Coleman. Amber also served as an Event Manager for SICSS-Howard/Mathematica 2023 and 2022. In 2018, she received the Edward Kraft Award. Her professional interests lie at the crossroads of machine learning, environmental justice, biostatistics, and privacy.


Explore more posts from the series: The Future of Computational Social Science is Black

Previous

Avoid Scams, Imposters, and Fraud in Online Research Participation

Next

Ethical Dilemmas for Data Collection on Social Media