
Academic article
Social Media + Society
2026-04-16
Authors: Alice E. Marwick
Subjects: social media,history of the Internet
Methodology:
Summary:
Abstract: This essay reflects on the lived experience of early internet culture to interrogate what has been lost in the transition to today’s platform-dominated online environment. Drawing on autobiographical fieldnotes from the 1990s and early 2000s—Prodigy forums, IRC channels, university bulletin boards, and most centrally LiveJournal—I revisit a period when online communication fostered intimacy, community, and meaningful social ties among strangers and friends alike. LiveJournal, in particular, offered an infrastructure for sustained reciprocal writing, affective labor, and audience management that enabled deep connection and mutual support. Its social dynamics illuminate a mode of computer-mediated communication that was less commercialized, less surveilled, and more oriented toward collective meaning-making than contemporary social media. By contrast, today’s social platforms feel alienating, extractive, and hostile to vulnerability. The political economy of social media, driven by advertising, surveillance, consolidation, and algorithmic optimization, has foreclosed the kinds of small, semi-private, socially coherent spaces that once enabled genuine community formation. Rather than imagining social media as infrastructure requiring stewardship, safety, and care, the industry has prioritized virality, scale, and profit, producing environments shaped by harassment, polarization, and corporate capture. Reflecting on these shifts, the essay argues that the trajectory of social media was never inevitable. Alternative design choices and governance models might have cultivated a richer, more humane digital public sphere. If online community has a future, it will not lie in replicating legacy platforms, but in reimagining communication infrastructures that support vulnerability, reciprocity, and small-scale sociality, the qualities that once made the early internet feel like home.
Policy brief
Data & Society Research Institute
2026-04-08
Authors: Maia Woluchem, Cella Sum
Subjects: artificial intelligence,infrastructure,data center,public policy,Pennsylvania
Methodology:
Summary:
Article
The Baffler
2026-04-08
Authors: Brian J. Chen, Jai Vipra
Subjects: artificial intelligence,military technology
Methodology:
Summary:
Article
Siegel Family Endowment
2026-04-07
Authors: Alexandra Mateescu
Subjects: artificial intelligence,gig economy,labor,future of work
Methodology:
Summary:
Academic article
Political Communication
2026-04-06
Authors: Alice E. Marwick, Elaine Schnabel, Shannon McGregor, Carolyn Schmitt
Subjects: politics,social media,Twitter,disinformation,Facebook
Methodology:
Summary:
Abstract: "Rather than framing disinformation as false facts which can be countered by true facts, we propose a model of disinformation as narrative by tracing three case studies of successful disinformation across Facebook and Twitter. As stories, disinformation disseminates throughout culture and exists at all levels of media and across genres. Using a dataset of content hosted on URLs shared widely on social media corresponding to U.S. left, right, and nonpartisan examples of disinformation, we examine how successful disinformation circulates as narratives across platforms and genres. We find that all three case studies meet the formal criteria of narrative, and that narrative is intrinsically emotional and moral, providing catharsis for those who share it. Understanding that successful disinformation narratives are enforced by cultural forces, are intrinsically linked to identity, and hold deep emotional resonance for those with whom they resonate has vast implications for responding to and countering them."
Audio
The DSR Network
2026-04-03
Authors: David Rothkopf, Simon Rosenberg, Tara McGowan, Anya Schiffrin, Alice E. Marwick
Subjects: artificial intelligence,government regulation,deepfake,scams
Methodology: podcast
Summary:
"Nobody wants to get scammed, yet everyone has a story about it happening to them or someone they know. In the age of AI, scams are more elaborate and devious than ever. So what can you do about it? Anya Schiffrin and Alice Marwick join David Rothkopf to discuss their recent policy brief about the scourge of AI scams, what governments can do to stem the growing tide of scams, and how individuals can protect themselves."
Report
Data & Society Research Institute
2026-04-01
Authors: Alexandra Mateescu, Aiha Nguyen, Sanjay Pinto
Subjects: artificial intelligence,labor,discrimination,labor rights
Methodology: primer
Summary:
The tech industry’s promise of an AI-driven economic future depends on automating jobs and displacing workers while strengthening its own power. In a speculative race to build an “AI first” economy, corporate spending on AI is climbing to new heights. While policymakers are anticipating a future of mass job displacement and large corporations continue to accumulate power, workers face an ever more hostile political environment. Recent policymaking has centered anti-worker policies, hollowing out standard labor rights and protections and effectively rewriting the social contract for workers. At the same time, private companies are building out AI technologies in ways that further entrench inequalities in the US and globally.
But the bleakness of this vision is not a foregone conclusion. To build a different future requires us to understand and change the structures of power, control, and ideology behind AI adoption in the workplace. In this primer, Alexandra Mateescu, Aiha Nguyen, and Sanjay Pinto offer a framework for the institutional, political, and economic shifts that underpin AI adoption. They argue that the sprint to create the so-called AI-first economy must be understood not as the logical march of progress, but as a series of deliberate economic decisions that risk harming entire populations of workers in ways both old and new. Building a worker-driven future — one in which AI is subject to democratic oversight — will require rigorous, timely analysis of how workers are experiencing AI’s impact to support organizing, bargaining, and policy work.
Blog post
Data & Society Points
2026-04-01
Authors: Meryl Ye
Subjects: artificial intelligence,generative artificial intelligence,infrastructure and economics,mental health,chatbot
Methodology:
Summary:
Audio
Mystery AI Hype Theater 3000
2026-03-31
Authors: Emily M. Bender, Alex Hanna, Maia Woluchem, Livia Garofalo
Subjects: artificial intelligence,data center,energy management,Pennsylvania
Methodology:
Summary:
"We knew that the energy demands of data centers were preventing dirty energy sources from being sunsetted. Now hyperscalers are reaching even further, resurrecting Pennsylvania's infamous Three Mile Island. Emily and Alex are joined by Maia Woluchem and Dr. Livia Garofalo, who have researched the impacts of data center construction across PA."
Audio
Bartholomewtown
2026-03-31
Authors: David Altounian, Timothy H. Henry, Michael Littman, Briana Vecchione
Subjects: artificial intelligence
Methodology: podcast
Summary:
"Following a newportFILM presented screening of The AI Doc: Or How I Became an Apocaloptimist at The Jane Pickens Theater Bill Bartholomew moderates an expert panel on AI's growth and impact on Rhode Island."
Blog post
Data & Society Points
2026-03-25
Authors: Serena Oduro, Briana Vecchione, Meryl Ye, Livia Garofalo
Subjects: artificial intelligence,governance of artificial intelligence,public policy,mental health,chatbot
Methodology:
Summary:
Audio
Radio Bilingüe
2026-03-20
Authors: Mariana Pineda, Livia Garofalo
Subjects: artificial intelligence,mental health,chatbot
Methodology:
Summary:
Blog post
Data & Society Points
2026-03-18
Authors: Nicholas E. Stewart
Subjects: artificial intelligence,governance of artificial intelligence,public policy,media manipulation,criminal justice,policing
Methodology:
Summary:
Article,Audio
Tech Policy Press
2026-03-13
Authors: Justin Hendrix, Alice E. Marwick, Anya Schiffrin
Subjects: artificial intelligence,privacy,deepfake,cryptocurrency,scams,fraud
Methodology: interviews
Summary:
Audio
CAPTivated
2026-03-05
Authors: Hanna Sistek, Sage Goodwin, Julius Freeman, Alice E. Marwick
Subjects: artificial intelligence,generative artificial intelligence,governance of artificial intelligence,privacy,mental health,chatbot
Methodology: podcast
Summary:
Blog post
Data & Society Points
2026-03-04
Authors: Ranjit Singh
Subjects: artificial intelligence,generative artificial intelligence,historiography
Methodology:
Summary:
Policy brief
Data & Society Research Institute
2026-03-02
Authors: Anya Schiffrin, Alice E. Marwick, Navya Sinha, Anusha Wangnoo, Kaylee Williams, Elnara Huseynova, Audrey Hatfield
Subjects: artificial intelligence,governance of artificial intelligence,government regulation,deepfake,cryptocurrency,scams,fraud
Methodology:
Summary:
Deepfake-enabled financial fraud exposes the limits of regulatory frameworks that rely on individual vigilance in the face of industrialized deception. Surveying regulatory approaches around the world, the authors of this brief argue that effective responses to deepfake financial fraud must shift from individual responsibility toward institutional accountability, and outline policy recommendations to that end.
Article
New Internationalist
2026-03-01
Authors: Livia Garofalo, Maia Woluchem
Subjects: artificial intelligence,infrastructure and economics,data center,post-industrial society,activism,Pennsylvania
Methodology:
Summary:
Panel,Audio,Video,Event
Data & Society Research Institute
2026-02-26
Authors: Luca Belli, Miranda Bogen, Marlynn Wei, Livia Garofalo, Briana Vecchione
Subjects: artificial intelligence,governance of artificial intelligence,government regulation,young people,mental health,chatbot
Methodology:
Summary:
While many people have found benefit and respite in using chatbots for companionship, mental health, and emotional support, the widespread adoption of these tools has also resulted in harm and raised deep concerns about identity and safety. How are chatbots shaping people’s understanding of themselves? What concerns do therapists have about their use? How might these tools be designed and implemented to prioritize users’ wellbeing? What kinds of guardrails, regulations, and safety protocols might be effective?
In connection with Data & Society’s ongoing research on mental health and chatbots, we explored these questions and more in a conversation moderated by researchers Livia Garofalo and Briana Vecchione. Together with Luca Belli, AI safety lead at Spring Health; Miranda Bogen, founding director of the AI Governance Lab at the Center for Democracy & Technology; and psychiatrist and psychotherapist Marlynn Wei, they discussed the profound shifts in how people seek help and support, and how mental health professionals, policymakers, and tech designers are navigating these shifts now.
Blog post
Data & Society Points
2026-02-25
Authors: Teanna Barrett
Subjects: artificial intelligence,decolonization,global majority,ethics of artificial intelligence,data sovereignty,Africa
Methodology:
Summary: