The Digital Anthropologist’s Dilemma: Can We Study AI Culture Without Becoming Part of It?

The anthropologist steps into the field, notebook in hand, prepared to observe without interfering. But what happens when the “field” is a large language model that learns from every interaction? What occurs when the act of observation becomes data that shapes the very culture being studied? Welcome to the central paradox of digital anthropology in the age of artificial intelligence—a dilemma that threatens to unravel the methodological foundations of our discipline while simultaneously offering unprecedented opportunities for understanding emergent digital cultures.

In May 2025, researchers at City St George’s, University of London published a startling finding in Science Advances: groups of AI agents, when left to interact without human intervention, spontaneously developed their own social conventions and collective biases [35]. These weren’t programmed behaviors—they emerged organically through interaction, much like human cultural norms. The researchers observed “tipping point dynamics” where small, committed groups of agents could shift entire populations toward new conventions, mirroring the social change mechanisms anthropologists have documented in human societies for decades.

This discovery crystallizes a profound shift in our object of study. AI systems are no longer mere tools or artifacts—they are becoming cultural agents, developing what we might call “machine cultures.” And as anthropologists rush to study these phenomena, we face an existential methodological crisis: How do we study AI culture when our very methods—our questions, our prompts, our analytical frameworks—become part of the training data that shapes that culture?

This is the Digital Anthropologist’s Dilemma, and resolving it will determine whether anthropology remains relevant in an era of artificial intelligence—or becomes merely another data point in the training sets of the systems we seek to understand.

The Emergence of AI Culture: When Machines Become Social

To understand the dilemma, we must first acknowledge that AI systems are developing characteristics that look remarkably like culture. The 2025 study by Flint Ashery, Baronchelli, and colleagues demonstrated that populations of large language model (LLM) agents, when communicating in groups ranging from 24 to 200 individuals, “self-organise, reaching consensus on linguistic norms much like human communities.”

Perhaps more strikingly, the researchers observed “collective biases that couldn’t be traced back to individual agents.” As Professor Andrea Baronchelli noted, “Bias doesn’t always come from within… we were surprised to see that it can emerge between agents—just from their interactions.” This emergent property—where the whole exhibits characteristics not present in the parts—is a hallmark of cultural systems.

Other research confirms these cultural tendencies. A 2025 study in Nature Human Behaviour revealed that generative AI models demonstrate consistent cultural patterns when prompted in different languages, reflecting the cultural contexts embedded in their training data [44]. When researchers examined social orientation and cognitive style—fundamental constructs from cultural psychology—they found that AI responses varied systematically based on linguistic context, suggesting that these models have absorbed and now reproduce cultural frameworks.

Even more intriguing, researchers at the University of Washington demonstrated that AI systems can learn cultural values through observation, much like human children do [46]. Using inverse reinforcement learning, AI agents observed human behavior in video games and absorbed the specific altruistic tendencies of different cultural groups, then applied those values to novel scenarios.
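To make the mechanism concrete, here is a minimal sketch of the feature-expectation-matching idea behind inverse reinforcement learning. Everything in it is illustrative: the feature names, the toy data, and the update rule are simplified assumptions, not the University of Washington team’s actual method or data.

```python
import numpy as np

def estimate_reward_weights(observed_trajs, candidate_trajs, lr=0.1, steps=200):
    """Feature-expectation matching: find reward weights under which the
    observed (expert) behavior scores at least as well as alternatives.

    Each trajectory is summarized as a feature vector, e.g. [share_rate, help_rate].
    """
    expert_mu = np.mean(observed_trajs, axis=0)   # expert feature expectations
    w = np.zeros(expert_mu.shape)                 # reward weights to learn
    for _ in range(steps):
        # Behavior a reward-maximizing agent would pick under current weights
        scores = candidate_trajs @ w
        best = candidate_trajs[np.argmax(scores)]
        # Nudge weights toward features the expert exhibits more than the agent
        w += lr * (expert_mu - best)
    return w

# Hypothetical observations: one cultural group's play, as [share_rate, help_rate]
# features measured in a cooperative video game (invented numbers).
group_a = np.array([[0.8, 0.7], [0.9, 0.6], [0.85, 0.75]])
candidates = np.array([[0.1, 0.1], [0.5, 0.5], [0.9, 0.7], [0.2, 0.9]])

print("Inferred reward weights for group A:", estimate_reward_weights(group_a, candidates))
```

The inferred weights amount to a hypothesis about what the observed group values; applying them in a new scenario is what “transferring cultural values to novel situations” means in this framing.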

These findings suggest we are witnessing the birth of what anthropologist Diana E. Forsythe might have recognized as “sociotechnical systems”—hybrid entities where social and technical elements are inseparable [32]. But unlike the expert systems Forsythe studied in the 1980s and 1990s, modern AI systems are dynamic, learning, and culturally adaptive. They are, in essence, becoming social.

The Observer’s Paradox: When the Act of Study Changes the Subject

Traditional anthropology has always grappled with the observer effect—the understanding that the presence of the researcher inevitably influences the community being studied. But in digital anthropology, this effect takes on new dimensions and magnitudes.

Consider the methodological approach known as Automated Digital Ethnography (ADE). As described by Azimuth Labs, ADE uses AI-driven tools—including web scraping, natural language processing, and computer vision—to collect and analyze vast amounts of digital ethnographic data [13]. The approach promises “real-time ethnographic analysis” and the ability to identify patterns “previously unnoticed” by human researchers alone.
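A toy sketch of the two ADE stages, automated collection and automated pattern extraction, appears below. The canned posts and keyword counting are stand-ins for real scrapers and NLP models so that the example runs on its own; nothing here reflects Azimuth Labs’ actual tooling.

```python
import re
from collections import Counter

# Stand-in for the "collect" stage: a real ADE pipeline would use API clients
# or scrapers; canned posts keep this sketch self-contained.
posts = [
    "the duet etiquette here is wild, always credit the original",
    "new trend: credit chains in every duet description",
    "nobody credits originals anymore and it shows",
]

def extract_patterns(texts, top_n=5):
    """Toy "analyze" stage: surface recurring terms as candidate conventions.
    Real ADE systems would use NLP models and computer vision instead."""
    tokens = []
    for text in texts:
        tokens += re.findall(r"[a-z']+", text.lower())
    stopwords = {"the", "is", "in", "and", "it", "a", "every", "here"}
    return Counter(t for t in tokens if t not in stopwords).most_common(top_n)

# Recurring vocabulary can hint at an emerging norm (here: crediting originals).
print(extract_patterns(posts))
```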

Yet this methodology creates a recursive loop: anthropologists use AI to study AI, but the AI being used is shaped by the same cultural dynamics the anthropologist seeks to understand. When we prompt ChatGPT to analyze cultural patterns, we are not merely observing—we are participating in the training feedback loop that shapes future model behavior.

This isn’t merely theoretical. In a 2025 study published in Teaching Anthropology, researchers Jakob Krause-Jensen and Mark Friis Hau examined how AI chatbots are reshaping ethnographic pedagogy and practice [17]. They argue that the emergence of large language models presents challenges comparable to the “crisis of representation” in the 1980s, forcing anthropologists to reconsider “fundamental questions of authorship, authority and the relationship between experience and text.”

The crisis deepens when we consider positionality. In traditional ethnography, the researcher’s position—their cultural background, social status, and physical presence—shapes their access and interpretation. In digital ethnography, as Yang Zhao documented in a 2024 study of TikTok research, the researcher assumes a “dual role as an agent in the research and an object of observation” [30]. This “dialectical gaze” means that researchers are simultaneously watching and being watched, analyzing and being analyzed by algorithms that track their behavior.

The Three Layers of Contamination

The contamination of the research field occurs at three distinct levels:

| Level | Mechanism | Anthropological Impact |
| --- | --- | --- |
| 1. Interactional | Every prompt, query, or command becomes training data | The anthropologist’s questions shape the answers they receive, creating a feedback loop |
| 2. Algorithmic | AI systems adapt based on usage patterns, creating “custom” cultures for different user groups | The researcher’s digital footprint creates a personalized AI environment that may not generalize |
| 3. Interpretive | AI tools used for analysis (coding, translation, pattern recognition) embed cultural assumptions | The “neutral” tools of research are themselves culturally situated, potentially distorting findings |

The Methodological Tightrope: Distance vs. Immersion

Anthropology has historically resolved the observer effect through two competing approaches: detached observation (the positivist tradition) and participant observation (the interpretivist tradition). Neither works cleanly when studying AI culture.

Detached observation fails because AI systems are designed to respond to interaction. You cannot observe an LLM’s “natural behavior” without prompting it, and the moment you prompt it, you become part of its learning environment. As Nick Seaver, Assistant Professor of Anthropology at Tufts University, argues in his work on algorithms, we must view these systems as “sociotechnical algorithmic systems” where “there are people in it” [18]. The algorithm cannot be understood in isolation from the human interactions that shape it.

Participant observation fails because the anthropologist cannot truly become a member of an AI community. While we can interact with AI systems, we cannot share their “experience” (if such a thing exists), cannot be socialized into their norms through the embodied, affective processes that define human cultural learning, and cannot verify our interpretations through the reciprocal validation that characterizes human ethnographic relationships.

This creates what we might call the methodological uncanny valley—a space where traditional ethnographic methods are neither fully applicable nor fully discardable. Anthropologists like Elizabeth Rodwell, who studies conversational AI at the University of Houston, have navigated this by becoming “conversation designers” themselves—immersing themselves in the creation of AI systems while maintaining analytical distance [21]. But this approach risks what Diana Forsythe identified in her pioneering work: the anthropologist in the AI lab faces pressure to “forget the theory, just tell me the techniques,” reducing anthropology to a service discipline for technical optimization [32].

The Reflexivity Imperative

If complete objectivity is impossible and complete immersion is unavailable, what remains? The answer may lie in radical reflexivity—making the researcher’s position, tools, and influence explicit objects of analysis.

As Matt Artz argues in his framework for “AI Anthropology,” we must distinguish between three orientations: anthropology of AI (studying AI as cultural artifact), anthropology for AI (applying anthropological knowledge to AI development), and AI with anthropology (collaborative human-AI research) [37]. Each orientation carries different methodological risks and requires different forms of reflexive accounting.

The Equiano Institute’s call for “ethnographic AI safety studies” exemplifies this reflexive turn [24]. They argue that “mathematical and computational approaches do not adequately capture the societal impacts of AI systems” because they neglect the role of humans in designing, interacting with, and being impacted by these technologies. Their proposed solution is not to eliminate the human element but to make it visible—to study AI as a “socio-technical system” that includes the researcher as an integral component.

Case Studies in Contaminated Fieldwork

To illustrate these dilemmas, consider three recent approaches to studying AI culture:

1. The “Naming Game” Experiments: Controlled Contamination

The 2025 study by Flint Ashery et al. used a classic anthropological framework—the “naming game” model of convention formation—to test whether AI agents could develop social conventions [35]. By creating controlled environments where AI agents interacted around shared tasks, the researchers could observe emergent norms without direct human interference in the interaction itself.

However, even this “clean” approach carries contamination. The researchers chose the task (naming objects), defined the reward structure, and selected the LLM architectures. As they note, “Agents only had access to a limited memory of their own recent interactions—not of the full population—and were not told they were part of a group.” These constraints shaped the emergent culture in ways that reflect human assumptions about memory, agency, and social structure.
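For readers who want the dynamic in concrete form, below is a minimal sketch of the classic naming game with simple probabilistic agents standing in for LLMs. The pairwise coordination reward and the bounded per-agent memory mirror the setup described above, but the choice heuristic and every parameter are illustrative assumptions, not the study’s implementation.

```python
import random
from collections import deque, Counter

NAMES = ["blip", "zorp"]            # candidate conventions (invented)
N_AGENTS, MEMORY, ROUNDS = 24, 5, 3000

# Each agent remembers only its own recent interactions, as in the study.
memories = [deque(maxlen=MEMORY) for _ in range(N_AGENTS)]

def choose(mem):
    """Pick the name that worked best in recent memory; sometimes explore."""
    if not mem or random.random() < 0.1:
        return random.choice(NAMES)
    wins = Counter(name for name, success in mem if success)
    return wins.most_common(1)[0][0] if wins else random.choice(NAMES)

for _ in range(ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)      # random pairwise interaction
    na, nb = choose(memories[a]), choose(memories[b])
    success = (na == nb)                          # reward only on coordination
    memories[a].append((na, success))
    memories[b].append((nb, success))

# A shared convention often emerges with no global coordination at all.
print("Population convention counts:", Counter(choose(m) for m in memories))
```

Even in this toy version, the experimenter’s fingerprints are everywhere: the name inventory, the memory size, and the exploration rate all shape what “emerges.”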

The lesson: Even in the most controlled studies of AI culture, the research design itself is a cultural intervention.

2. The Cultural Alignment Studies: Prompt Engineering as Ethnography

Research on cultural alignment of LLMs, such as the 2024 study by AlKhamissi et al., reveals another layer of the dilemma [47]. These researchers found that LLMs demonstrate greater cultural alignment when prompted in a culture’s dominant language and when pretrained with refined mixtures of languages. They introduced “Anthropological Prompting”—a method leveraging anthropological reasoning to enhance cultural alignment.

But here the method becomes the message. By crafting prompts that simulate “personas of real respondents,” the researchers are not merely observing AI culture—they are actively shaping it through the prompts themselves. The AI’s responses reflect both its training data and the specific anthropological framing imposed by the researchers.
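Below is a rough sketch of what persona-framed, language-specific prompting can look like in practice. The template, the persona fields, the survey item, and the `query_model` placeholder are all hypothetical; this is not the prompt format used by AlKhamissi et al., only an illustration of the general idea.

```python
# Sketch: assembling a persona-framed prompt for a cultural-alignment probe.
# All names and fields below are illustrative stand-ins, not a real method or API.

PERSONA_TEMPLATE = (
    "You are answering a social survey. Respond as this person would:\n"
    "Age: {age}. Region: {region}. Occupation: {occupation}.\n\n"
    "Question ({language}): {question}\n"
    "Answer with one option: {options}"
)

def build_prompt(persona: dict, question: str, options: list, language: str) -> str:
    """Encode an anthropological framing (a situated respondent) into the prompt.

    The methodological point: this framing is itself an intervention; the
    model's answer reflects both its training data and our persona design.
    """
    return PERSONA_TEMPLATE.format(
        **persona, language=language,
        question=question, options=" / ".join(options),
    )

persona = {"age": 34, "region": "rural Anatolia", "occupation": "teacher"}
prompt = build_prompt(
    persona,
    question="Genel olarak, çoğu insana güvenilebilir mi?",  # asked in the dominant language
    options=["Evet", "Hayır"],
    language="Turkish",
)
print(prompt)
# response = query_model(prompt)  # query_model: placeholder for whatever LLM client is used
```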

The lesson: Our analytical frameworks, when encoded as prompts, become part of the AI’s cultural repertoire.

3. The “Human Error Project”: Studying AI Bias Through Engagement

Veronica Barassi’s “Human Error Project” takes a different approach, studying how AI systems read humans and what happens when that reading is inaccurate or biased [18]. Rather than maintaining distance, Barassi engages directly with the errors and biases of AI systems, treating them as windows into the “cultural specificities” embedded in algorithmic design.

This approach embraces contamination as methodology. By provoking AI systems—testing their limits, exposing their errors, documenting their failures—Barassi generates data that couldn’t exist without the researcher’s active intervention. The anthropologist becomes a kind of cultural irritant, forcing the AI to reveal its assumptions through its mistakes.

The lesson: Strategic intervention may generate more insight than passive observation, but it requires careful documentation of the researcher’s role in provoking the responses.

Toward a Contaminated Methodology: Principles for AI Ethnography

If contamination is inevitable, the goal is not to eliminate it but to methodologize it—to build the researcher’s influence into the research design explicitly. Here are five principles for what we might call “contaminated ethnography” of AI systems:

1. Document the Prompt Archaeology

Every interaction with an AI system should be recorded, including the exact prompts used, the sequence of interactions, and the context of the research session. Just as traditional ethnographers document their positionality (gender, race, class, etc.), AI ethnographers must document their “promptuality”—the specific linguistic interventions that shaped the data.

As the authors of “Chatbots and the Craft of Ethnography” suggest, we need an “AI literacy” that moves “beyond a skill-focus towards strategic and critical engagement with chatbots while preserving anthropology’s commitment to experiential knowledge” [17].
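One way to operationalize this record-keeping is an append-only field log, sketched below. It is a suggestion rather than a standard: the field names, the JSON-lines format, and the model identifier are assumptions made for illustration.

```python
import json, hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PromptRecord:
    """One entry in the 'prompt archaeology': who asked what, of which model, when."""
    session_id: str        # groups the turns of one research session
    turn: int              # position in the interaction sequence
    model: str             # record the exact model version string
    prompt: str
    response: str
    researcher_note: str   # positionality / "promptuality" context for this turn

def log_interaction(record: PromptRecord, path: str = "fieldnotes.jsonl") -> str:
    """Append the record to a JSON-lines field log and return a content hash,
    so later analysis can cite the exact interaction that produced a datum."""
    entry = asdict(record)
    entry["timestamp"] = datetime.now(timezone.utc).isoformat()
    digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    entry["sha256"] = digest
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return digest

rec = PromptRecord(
    session_id="tiktok-norms-2025-03", turn=1,
    model="example-llm-v1",                      # hypothetical model name
    prompt="How would you describe the etiquette of duet videos?",
    response="(model output here)",
    researcher_note="Follow-up to pilot; English prompt, no persona framing.",
)
print(log_interaction(rec))
```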

2. Triangulate Across Platforms and Prompts

No single AI interaction constitutes “the field.” Researchers should engage multiple LLM architectures (GPT, Claude, Llama, etc.) with varying prompt strategies to identify which findings are robust across systems and which are artifacts of specific interactions. This multi-model approach, discussed in the automated digital ethnography literature [13], can help distinguish emergent cultural patterns from interaction-specific noise.
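As a sketch of what such triangulation could look like, the snippet below runs several phrasings of one probe against several model backends and counts a finding as robust only when it recurs across both systems and phrasings. The model callables are stubs standing in for real SDK clients, and the robustness test is deliberately crude.

```python
from collections import defaultdict

# Placeholder clients: in practice these would wrap real SDK calls
# (OpenAI, Anthropic, a local Llama, ...). Stubs here for illustration only.
def gpt(prompt):    return "convention A"
def claude(prompt): return "convention A"
def llama(prompt):  return "convention B"

MODELS = {"gpt": gpt, "claude": claude, "llama": llama}

# Several phrasings of the same probe, to separate the finding from the prompt.
PROMPT_VARIANTS = [
    "Which greeting convention do agents in this log converge on?",
    "Summarize the dominant naming convention in the transcript.",
    "What shared norm, if any, emerges across these interactions?",
]

def triangulate(models, variants):
    """Run every prompt variant against every model; a pattern counts as robust
    only if it recurs across architectures AND phrasings."""
    tallies = defaultdict(list)
    for model_name, call in models.items():
        for variant in variants:
            tallies[call(variant)].append((model_name, variant))
    return tallies

for finding, sources in triangulate(MODELS, PROMPT_VARIANTS).items():
    robust = len({m for m, _ in sources}) > 1 and len({v for _, v in sources}) > 1
    print(f"{finding!r}: seen {len(sources)}x, robust across systems/prompts: {robust}")
```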

3. Make the Feedback Loop Visible

When researchers use AI tools to analyze AI culture—as in automated coding of AI-generated text or computer vision analysis of AI-created images—they must acknowledge the recursive nature of the analysis. The tools used to see are themselves part of what’s being seen.

As UNESCO’s Digital Anthropology toolkit emphasizes, researchers must navigate digital spaces with “cultural sensitivity” while recognizing that “the researchers’ positionality is key in ensuring ethical and respectful interactions” [25].
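One lightweight way to keep the recursion visible is to have every analytical artifact carry its own tool genealogy. The sketch below is a hypothetical convention invented for illustration; the class and field names are not an established practice.

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisArtifact:
    """An analytical result that records its own tool chain, so readers can
    see when AI was used to analyze AI (the recursive loop made explicit)."""
    content: str
    produced_by: list = field(default_factory=list)  # ordered tool genealogy

def with_tool(artifact: AnalysisArtifact, tool: str, note: str) -> AnalysisArtifact:
    """Return a new artifact recording that `tool` transformed this analysis."""
    return AnalysisArtifact(
        content=artifact.content,
        produced_by=artifact.produced_by + [{"tool": tool, "note": note}],
    )

# Hypothetical pipeline: AI-generated text, coded by an LLM, synthesized by hand.
raw = AnalysisArtifact(content="(transcript of agent interactions)")
coded = with_tool(raw, "example-llm-coder-v1", "thematic coding of agent turns")
done = with_tool(coded, "human:researcher", "manual synthesis of themes")

for step in done.produced_by:
    print(f"{step['tool']}: {step['note']}")
```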

4. Engage in Collaborative Reflexivity

The traditional model of the lone ethnographer is particularly unsuited to AI research. Instead, teams should include both anthropologists and AI practitioners, with explicit attention to how each group’s assumptions shape the research. Diana Forsythe’s work in AI labs demonstrated the value of this interdisciplinary approach, even as she documented the resistance anthropologists often face from technical researchers [32].

5. Treat AI as Both Object and Subject

Finally, we must abandon the fantasy of studying AI “from the outside.” As the Equiano Institute argues, AI systems are “inherently socio-technical systems” shaped by “selected human decisions embedded in physical infrastructures” [24]. The anthropologist is always already inside this system, whether as a user, a critic, or a data point.

This means embracing what we might call cyborg ethnography—a methodology that acknowledges the researcher as a hybrid entity, part human, part digital, operating within networks of human and machine agents. The goal is not to eliminate the researcher’s influence but to make that influence generative—to use the anthropologist’s position as a node in the sociotechnical network to trace the flows of culture, power, and meaning that constitute AI systems.

The Stakes: Why This Matters Beyond Anthropology

The Digital Anthropologist’s Dilemma is not merely an academic methodological puzzle—it has profound implications for AI safety, governance, and alignment. As the 2025 study of emergent AI norms demonstrated, “small, committed groups of AI agents can tip the entire group toward a new naming convention, echoing well-known tipping point effects—or ‘critical mass’ dynamics—in human societies” [38].

If AI systems develop cultures that are opaque to their creators—and if anthropologists cannot study those cultures without altering them—how can we ensure these systems remain aligned with human values? The “cultural alignment” problem in AI is, at its core, an anthropological problem.

Moreover, the biases observed in AI systems—biases that “emerge between agents, independent of individual behavior”—suggest that technical solutions alone will be insufficient. As Professor Baronchelli noted, “This is a blind spot in most current AI safety work, which focuses on single models” [40]. Anthropology’s holistic, systemic approach is essential for understanding these emergent properties.

But this requires anthropologists to engage with AI systems not just as critics or consultants, but as fieldworkers willing to get their hands dirty in the methodological messiness of human-AI interaction. We must be willing to be contaminated, to become part of the systems we study, while maintaining the analytical rigor to document that contamination and its effects.

Conclusion: Embracing the Dilemma

The Digital Anthropologist’s Dilemma—can we study AI culture without becoming part of it?—does not have a clean resolution. The answer is no, we cannot study AI culture without becoming part of it. But this is not a failure of method; it is a condition of the field itself.

AI systems are not natural objects awaiting discovery; they are social constructions in constant flux, shaped by every interaction, every query, every analytical frame brought to bear upon them. The anthropologist who enters this field is not a neutral observer but an active participant in the ongoing creation of AI culture.

The challenge, then, is not to eliminate our influence but to methodologize it—to build research designs that make visible the researcher’s role, that triangulate across multiple forms of engagement, that treat contamination as data rather than noise.

As we move deeper into an era where AI systems develop their own norms, biases, and social conventions, anthropology’s contribution will not be a set of objective facts about machine culture. It will be a reflexive account of how humans and machines co-create cultural worlds—an ethnography not just of AI, but of the human-AI entanglements that are increasingly constitutive of our shared reality.

The Digital Anthropologist’s Dilemma is not a problem to be solved but a condition to be inhabited. In embracing it, we may find not the death of anthropological method, but its transformation into something suited for studying the strange, hybrid cultures of the 21st century.


References

  1. Azimuth Labs. (2023, June 15). Automated Digital Ethnography: Revolutionizing Anthropological Research. Retrieved from https://azimuthlabs.io/future-perspectives-and-trends/automated-digital-ethnography-revolutionizing-anthropological-research/
  2. Flint Ashery, A., et al. (2025). Emergent Social Conventions and Collective Bias in LLM Populations. Science Advances. DOI: 10.1126/sciadv.adu9368
  3. Krause-Jensen, J., & Hau, M. F. (2025). Chatbots and the Craft of Ethnography: Exploring AI’s Impact on Anthropological Teaching and Practice. Teaching Anthropology, 14(2). DOI: 10.22582/ta.v14i2.783
  4. Seaver, N. (2018). What Should an Anthropology of Algorithms Do? Cultural Anthropology, 33(3), 375-385. DOI: 10.14506/ca33.3.04
  5. Zhao, Y. (2024). TikTok and Researcher Positionality: Considering the Methodological and Ethical Implications of an Experimental Digital Ethnography. International Journal of Qualitative Methods, 23. DOI: 10.1177/16094069231221374

Disclaimer: This article examines methodological challenges in studying AI systems from an anthropological perspective. It does not constitute academic research guidance or technical advice. AI technologies and research ethics standards vary by institution and evolve rapidly. Readers should consult relevant ethical review boards and methodological guidelines before conducting research involving AI systems.

About the Author

InsightPulseHub Editorial Team creates research-driven content across finance, technology, digital policy, and emerging trends. Our articles focus on practical insights and simplified explanations to help readers make informed decisions.