
Turning the Tide: Climate Action In and Against Tech

Report
Data & Society Research Institute
2025-12-10
Authors: Tamara Kneese
Subjects: artificial intelligence,generative artificial intelligence,data center,activism,supply chain,climate change,sustainability,energy management,environmental impact assessment,environmental justice
Methodology: interviews,participatory methods

Summary:
In Turning the Tide: Climate Action In and Against Tech, Tamara Kneese examines how, in a time of AI ascendance and data center accelerationism, tech workers and larger coalitions have attempted to reform the tech industry from within while applying external forms of pressure through policymaking and activism. Based on 12 months of research alongside climate-conscious tech workers (both inside and outside of companies), this report documents how tech-focused climate work gets done today and highlights its political stakes. It concludes with a series of recommendations for how to help close the gap between corporate sustainability metrics and on-the-ground community resistance.

Comment to the FDA on Generative AI-Enabled Digital Mental Health Medical Devices

Policy brief
Data & Society Research Institute
2025-12-08
Authors: Ranjit Singh, Briana Vecchione, Livia Garofalo, Meryl Ye
Subjects: artificial intelligence,generative artificial intelligence,governance of artificial intelligence,mental health,chatbot
Methodology:

Summary:
In our comment to the Food and Drug Administration (FDA), we draw on ongoing Data & Society research to focus on what people’s actual, everyday use of chatbots for mental and emotional support means for the FDA’s approach to generative AI-enabled digital mental health medical devices. Specifically, we show how chatbot use complicates traditional notions of “intended use,” “benefit-risk,” and “postmarket performance,” and we offer recommendations for how the FDA might adapt its frameworks for devices that act through open-ended, relational conversation.

Fact-Finding in a Failing State: Computer Says Maybe Deep Dive

Audio
Computer Says Maybe
2025-12-05
Authors: Alix Dunn, Megan Price, Janet Haven, Charlton McIlwain
Subjects: artificial intelligence,research ethics,data science,public policy,statistics,sociotechnology,human rights
Methodology:

Summary:
On the Computer Says Maybe podcast, Executive Director Janet Haven and Board President Charlton McIlwain sat down with Alix Dunn to discuss the tech industry’s co-opting of academia, how giving the reins to tech oligarchs hurts people, and why independent research is essential in this moment of AI ascendance.

From Public Concern to Public Power: How 2025 Transformed the Responsible AI Landscape

Panel,Video
All Tech Is Human
2025-12-04
Authors: Rebekah Tweed, Janet Haven, Stephanie Bell, Zeina Abi Assy
Subjects: artificial intelligence,governance of artificial intelligence,public policy,ethics of artificial intelligence
Methodology:

Summary:
"The livestream discussion on All Tech Is Human’s 2025 Responsible AI Impact Report highlighted how Responsible AI has rapidly shifted into mainstream public concern, driven by technological advances, growing societal impacts, and a patchwork of evolving governance efforts. Panelists from Data & Society (Janet Haven), Partnership on AI ( Stephanie Bell), and Mozilla Foundation (Zeina Abi Assy) emphasized that while federal protections in the U.S. have weakened, state-level action, global regulatory momentum, and heightened civic engagement are creating new pathways for accountability."

Beyond Silicon Valley: California Data Centers in Context

Blog post
Data & Society Points
2025-12-03
Authors: Tamara Kneese, Cecilia Marrinan
Subjects: data center,public policy,hazardous waste,pollution,environmental impact assessment,Silicon Valley,California
Methodology:

Summary:

Situating Virginia’s Data Center Alley in a New Era of Tech Power

Blog post
Data & Society Points
2025-12-03
Authors: Hannah Lipstein, Tamara Kneese
Subjects: digital infrastructure,data center,surveillance,pollution,security,military technology,Virginia
Methodology:

Summary:

All the Lonely People: On Being Alone with Digital Companions

Blog post
Data & Society Points
2025-12-01
Authors: Livia Garofalo, Briana Vecchione
Subjects: artificial intelligence,large language model,mental health,chatbot
Methodology:

Summary:

From Care Labor to Data Labor: India’s Door-to-Door Health Activists

Blog post
Data & Society Points
2025-11-19
Authors: Priya Goswami
Subjects: artificial intelligence,health care,global majority,labor,digital identity,India
Methodology:

Summary:
From our series Democratizing AI for the Global Majority.

Democratizing AI for the Global Majority

Blog post
Data & Society Points
2025-11-19
Authors: Emnet Tafesse, Abigail Oppong
Subjects: artificial intelligence,democracy,global majority,ethics of artificial intelligence,diversity, equity, and inclusion,colonialism,military technology
Methodology:

Summary:
"This series, curated by researchers Emnet Tafesse and Abigail Oppong in collaboration with Data & Society’s AI on the Ground program, explores the need for a more equitable and inclusive approach to AI development and deployment, one that prioritizes the voices and needs of those in the Majority World, and fosters a more balanced and just technological landscape. Contributors explore the complex layers of tech colonialism, exploring the phenomenon by focusing on its manifestations in health, language, labor, and other areas."

Troubling translation: Sociotechnical research in AI policy and governance

Academic article
Internet Policy Review
2025-11-18
Authors: Serena Oduro, Alice E. Marwick, Charley Johnson, Erie Meyer
Subjects: artificial intelligence,governance of artificial intelligence,public policy,sociotechnology
Methodology: case study

Summary:
Abstract: As technology companies develop and incorporate artificial intelligence (AI) systems across society, calls for a sociotechnical approach to AI policy and governance have intensified. Sociotechnical research emphasises that understanding the efficacy, harms, and risks of AI requires attention to the cultural, social, and economic conditions that shape these systems. Yet development and regulation often remain split between technical research and sociotechnical work rooted in the humanities and social sciences. Bridging not only disciplinary divides but the gap between research and AI regulation, policy, and governance is crucial to building an AI ecosystem that centres human risk. Researchers are frequently urged to “translate” findings for policymakers, with translation framed as a pathway to interdisciplinary collaboration and evidence-based governance. This paper troubles this notion of translation by examining two case studies: the National Institute of Standards and Technology’s US AI Safety Institute Consortium and the Public Technology Leadership Collaborative. We show that spanning research and policy requires more than simplified communication, but depends on building relationships and navigating the uneven terrain where academics, policymakers, and practitioners meet. We conclude with recommendations for existing governmental mechanisms for incorporating researchers into policymaking, which may not be widely known to academics.

Meta Must Rein In Scammers — Or Face Consequences

Article
The Verge
2025-11-14
Authors: Lana Swartz, Alice E. Marwick
Subjects: social media,digital advertising,Facebook,Instagram,scams
Methodology:

Summary:

Advocating with Evidence: Lessons for Tech Researchers in Civil Society

Panel,Video,Event
Center for Democracy & Technology
2025-11-13
Authors: Alice Marwick, Marissa Gerchick, Dhanaraj Thakur, Jordan Kraemer
Subjects: artificial intelligence,democracy,public policy,civil rights,bias,discrimination
Methodology:

Summary:
"As technology and the tech industry evolve, civil society is struggling to address the unintended consequences of change, including tools that can be used to erode democracy, amplify bias, and accelerate hateful ideologies. Timely, accessible findings are necessary to inform policy and advocacy, but it can be challenging to produce research that is simultaneously rigorous and high-impact.

This panel brings together research experts and practitioners who bridge policy and academia to advance civil rights and challenge tech injustice. Drawing on their experience as research leaders, the panelists will discuss how to conduct effective research on the harms of digital technologies and for developing solutions that center marginalized and vulnerable communities."

Tech Life: Web-scraping bots

Audio
BBC
2025-11-04
Authors: Alexandra Mateescu, Zoë West
Subjects: artificial intelligence,labor,labor rights,fashion models
Methodology: podcast

Summary:

Notes Towards a Digital Worker's Inquiry

Book
Common Notions Press
2025-11-04
Authors: The Capacitor Collective
Subjects: artificial intelligence,gig economy,labor,activism,technology industry,labor rights,precarious work
Methodology:

Summary:
"First-hand accounts from the tech sector’s resurgent labor movement as artificial intelligence gains ground in every facet of our lives.

As tech billionaires align with Trump, they are also launching a renewed assault on labor through artificial intelligence and alienating tactics. But for now, it still takes workers to make fortunes for the bosses, and collective action is again on the rise. The rank and file are now coming from precarious new “gig jobs” and drawing strength from a class of worker who does what computers still cannot. Previously thought to be “unorganizable,” these workers are part of a North American movement that is reaffirming faith in collective revolutionary action through new methods of organizing, new ways of association, and a new synthesis of traditional labor activities with original research.

To capture this growing class consciousness, the Capacitor Collective has conducted ten illuminating interviews with platform workers and organizers whose efforts align traditional motives with new tactics in a text that shakes up the worker inquiry tradition and imagines new ways to produce knowledge with and for the movement."--publisher's description

AI Governance Under Pressure: Regulation, Risk, and the Race to Deploy

Panel,Video
All Tech Is Human
2025-11-03
Authors: John Hearty, Bilva Chandra, Amba Kak, Janet Haven
Subjects: artificial intelligence,governance of artificial intelligence,government regulation,public policy,research data management
Methodology:

Summary:
Taking part in the AI governance panel at All Tech Is Human's Responsible Tech Summit, D&S Executive Director Janet Haven talked about the critical role that grounded, empirical research on AI plays in creating policy that reflects the technology’s real impacts on people.

On arXiv, an Influx of AI Slop Pits Surface Against Substance

Blog post
Data & Society Points
2025-10-30
Authors: Ranjit Singh
Subjects: artificial intelligence,large language model,research ethics,AI slop
Methodology:

Summary:

A chatbot for the soul: mental health care, privacy, and intimacy in AI-based conversational agents

Academic article
Communication and Change
2025-10-29
Authors: Tamara Kneese, Briana Vecchione, Alice Marwick
Subjects: artificial intelligence,impact assessment,algorithmic impact assessment,teletherapy,mental health,chatbot
Methodology: qualitative methods,participatory methods,focus groups,impact assessment,red-teaming

Summary:
Abstract: Artificial intelligence-based conversational agents—chatbots—are increasingly integrated into telehealth platforms, employee wellness programs, and mobile applications to address structural gaps in mental health care. While these chatbots promise accessibility, they are often deployed without sufficient impact assessment or even basic user testing. This paper presents a case study using community red-teaming exercises to evaluate a chatbot designed for wellness and spirituality. Unlike traditional red-teaming, which is conducted by engineers to assess vulnerabilities, community red-teaming treats impacted users as experts, uncovering concerns related to privacy, ethics, and functionality. Our fieldwork, conducted with undergraduate beta testers (n = 28), revealed that participants were often more comfortable sharing private information with the chatbot than with a stranger. Prior experience with commercial AI systems, such as ChatGPT, contributed to this ease. However, participants also raised concerns about misinformation, inadequate guardrails for sensitive topics, data security, and dependency. Despite these risks, users remained open to the chatbot’s potential as a spiritual wellness guide. We further examine how algorithmic impact assessments (AIAs) both capture and overlook key aspects of the spiritual chatbot user experience. These chatbots offer hyper-personalized, AI-mediated divination and wellness interactions, blurring the boundaries between astrology, mental health support, and spiritual guidance. The chameleon-like nature of these technologies challenges conventional assessment frameworks, necessitating a nuanced approach that considers specific use cases, potential user bases, and indirect impacts on broader communities. We argue that AIAs for care and wellness chatbots must account for these complexities, ensuring ethical deployment and mitigating harm.

The Uses and Limits of Algorithmic Impact Assessments

Blog post
Data & Society Points
2025-10-29
Authors: Meg Young, Tamara Kneese
Subjects: public policy,impact assessment,algorithmic impact assessment,algorithms
Methodology:

Summary:
Our method for conducting community-based algorithmic impact assessments, including an extensive toolkit, documentation of our pilots, and a series of reflections on lessons learned.

A Roadmap for Rewiring Democracy in the Age of AI

Panel,Video,Event
Data & Society Research Institute
2025-10-23
Authors: Nathan E. Sanders, Bruce Schneier, Alice Marwick
Subjects: artificial intelligence,democracy
Methodology:

Summary:
Democracy faces challenges around the world, and artificial intelligence is further compounding them. In their book Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, cybersecurity technologist Bruce Schneier and data scientist Nathan E. Sanders catalog the ways that AI is changing democracy and make the case that we can harness the technology to support and strengthen democracy in turn. Neither fear-mongering nor utopian, Rewiring Democracy aims to present a clear-eyed and optimistic path for putting democratic principles at the heart of AI development — highlighting how citizens, public servants, and elected officials can use AI to expand access to justice and inform, empower, and engage the public.

The authors discussed their book with Data & Society’s Director of Research Alice Marwick, and walked us through their roadmap for understanding how AI is changing power and participation and what we can do to shape that change for the better.

Spiritual Chatbots in Uncertain Times

Article
ASAP Review
2025-10-21
Authors: Tamara Kneese, Briana Vecchione
Subjects: artificial intelligence,generative artificial intelligence,large language model,algorithms,chatbot
Methodology:

Summary:
