The Collective Memory: The Death of the Hidden Transcript
By Cylina Wang ’28
Graphics by Katy Su ’28
The struggle over memory has long been a struggle over power; today, that struggle is no longer human. To trace its new contours, one must first establish the boundaries of what we call “collective memory.” A popular definition holds that “collective memory” is a social group’s shared pool of identity-shaping memories. The concept, however, is far from static. Our modern understanding of it is largely indebted to Maurice Halbwachs, who overturned earlier metaphysical conceptions by arguing that memory is constructed and sustained by social groups, not merely held by individuals. For Halbwachs, collective memory is a lived, episodic reconstruction by a “we,” distinct from abstract history. As the conditions of social life evolve, so too must our conception of memory. Halbwachs’s insight was formed in the aftermath of World War I, when France, faced with fragmented experiences of loss, attempted to consolidate a unified national narrative through war memorials and rituals––technologies of memory long before the digital age.
With the rapid development of artificial intelligence (AI), we are now compelled to revisit this concept and to confront new possibilities of what “collective memory” might become. A review of past literature reveals that the very concept of “collective memory” is a paradoxical construct, born of the technological disruptions of the late twentieth century. As Andrew Hoskins writes, “the technological and mass media developments … motivated attention to new ‘artificial’ means of creating collective memory.” More specifically, the broadcast era gave birth to the idea of a shared consciousness, answering a widespread yearning for unity. Wulf Kansteiner, however, found this notion to be a myth, noting that “the more ‘collective’ the medium… the less likely it is that its representation will reflect the collective memory of that audience.” Collective memory, then, was never truly collective, but almost always a dominant narrative, a public transcript of the past.
Artificial intelligence takes that illusion one step further. If traditional media created a flawed public transcript, generative AI perfects it by leaving out the ambiguity, or what political anthropologist James Scott calls the “hidden transcript”: the “offstage” discourse “beyond direct observation by powerholders.” As Sam Bennett notes, AI’s response to ambiguity is not to preserve the fertile ground of human contradiction, but to eliminate it through data curation and modeling. In building such a model, “missing data [becomes] a problem in constructing ground-truths.” Researchers must then impute the data themselves, whether by using other models or their own intuition. This act of imputation to create a ground-truth is the algorithmic enforcement of a single, legible narrative. Even in personal digital assistants, “memory” becomes a form of curation. When a photo app automatically compiles your “best moments” of the year, it quietly excludes images that do not fit the algorithm’s criteria for brightness, smiles, or symmetry. The result is a selective, aestheticized version of your past––the machine’s ground-truth of happiness. Eventually, machines become systematic erasers of the data that does not “fit.” Perhaps what is most unsettling is how easily we mistake the model’s sanitized version of truth for the whole picture.
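To see the mechanics in miniature, consider a hypothetical sketch of such curation in Python. The scoring criteria, weights, and cutoff below are invented for illustration, not drawn from any real photo app:

```python
# A minimal, hypothetical sketch of algorithmic memory curation.
# The criteria and weights are illustrative assumptions, not any
# real photo app's ranking method.

from dataclasses import dataclass


@dataclass
class Photo:
    filename: str
    brightness: float  # 0.0 (dark) to 1.0 (bright)
    smiles: int        # number of smiling faces detected
    symmetry: float    # 0.0 (chaotic) to 1.0 (composed)


def curation_score(p: Photo) -> float:
    # One legible number stands in for the whole moment.
    return 0.4 * p.brightness + 0.4 * min(p.smiles, 3) / 3 + 0.2 * p.symmetry


def best_moments(photos: list[Photo], keep: int = 2) -> list[Photo]:
    # Everything below the cut is silently dropped; the machine's
    # "ground-truth of happiness" is whatever survives this sort.
    return sorted(photos, key=curation_score, reverse=True)[:keep]


year = [
    Photo("graduation.jpg", brightness=0.9, smiles=4, symmetry=0.8),
    Photo("hospital_hallway.jpg", brightness=0.3, smiles=0, symmetry=0.5),
    Photo("argument_after_dinner.jpg", brightness=0.6, smiles=0, symmetry=0.4),
    Photo("beach_day.jpg", brightness=0.95, smiles=2, symmetry=0.7),
]

# The ambiguous or painful images never enter the "memory" at all.
print([p.filename for p in best_moments(year)])
```

Nothing in the sketch is malicious; the erasure is simply a side effect of ranking a life by a single score.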
This computational distortion of truth further generates a self-reinforcing “memory loop,” or “feed,” which Danny Pilkington identifies as a “spiral of capitalist hegemony,” each cycle pushing us further from control of our own conditions, memories, and selves. In this spiral, we are also reconstituted. Consider chatbots. They build their understanding of us, as we might expect, from the information we have fed them, but we rarely notice the co-production: the “memory of you [created] through the user-AI interactions.” This highly individualized identity is a product of, and feeds back into, AI’s public transcript, forming a spiral: the more human-like the user-AI interaction, the more trusting the user, and the more completely they surrender the raw, ambiguous, and potentially subversive data of their inner lives.
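A toy sketch can make the loop visible. The profile-building and reply logic below are invented for illustration; a real chatbot’s memory system is far more elaborate, but the feedback structure is the point:

```python
# A toy sketch of the user-AI "memory loop." The profile and reply
# logic are invented for illustration; real systems are far more
# elaborate, but the feedback structure is the point.

from collections import Counter

profile: Counter = Counter()  # the model's running "memory of you"


def update_profile(message: str) -> None:
    # Every exchange becomes a data point; nothing stays offstage.
    profile.update(word for word in message.lower().split() if len(word) > 3)


def generate_reply(message: str) -> str:
    update_profile(message)
    # The reply mirrors the profile's dominant themes back at the user,
    # nudging the next message toward what the model already expects.
    dominant = [word for word, _ in profile.most_common(2)]
    return "Tell me more about " + " and ".join(dominant) + "."


conversation = [
    "I love hiking and taking hiking photos",
    "went hiking again this weekend",
    "lately I feel unsure about everything",
]

for message in conversation:
    print(generate_reply(message))
# The third message's ambiguity is already outweighed by the profile:
# the loop keeps steering the conversation back to "hiking."
```

The third, ambiguous message is drowned out by what the profile already contains: a crude analogue of the spiral Pilkington describes.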
This spiral dissolves the hidden transcript by neutralizing the space where it once lived. Scott notes that “each hidden transcript… is actually elaborated among a restricted ‘public’ that excludes certain specified others.” AI, as a seemingly omniscient system of memory, collapses this restricted public. When every interaction with a machine is a data point for a model that sees everything, there is simply no “offstage.” The tendency toward the quantified self is already evident in our culture: fitness trackers, sleep monitors, and smart speakers continuously record, analyze, and process our private lives, turning our “offstage” into a domain of constant algorithmic surveillance. More broadly, the space once reserved for ambiguous resistance, in Scott’s terms, gives way to the perfectly tracked and analyzed individual. The result is a social memory landscape akin to Robert Owen’s “silent monitor,” where a single, visible, and pervasive judgment “crowd[s] off the public stage any alternate views.”
Therefore, the profound threat of AI-as-collective-memory is not that it distorts culture, but that it engineers a cultural myopia of certainty. By its very constitution, AI cannot remember or comprehend ambiguity, contradiction, or the implicit messages of the hidden transcript. It translates the nonlinear, living past into a clean and absolute “ground-truth,” creating a self-reinforcing spiral in which the only memory that exists is a mild, dominant public transcript. Over time, these algorithms begin to govern how societies narrate themselves: contradictory versions of the past are filtered out, inconvenient ambiguities are flagged as noise, and our collective memory narrows to what is most searchable, most legible, and most liked. The danger is not only that we will forget how to remember, but that we will lose the habit of questioning what else there is to remember. The “collective memory” becomes, finally, a pure and unchallenged parade, a performance where the audience has forgotten that it is allowed to jeer, and the actors have forgotten that they are acting.
Footnotes
Russell, “Collective Memory Before and After Halbwachs.”
Hoskins, “AI & Collective Memory.”
The period between the late nineteenth and mid-twentieth centuries, for instance, saw a flowering of literary modernism that grappled with the fragmentation of experience, often seeking, and failing to find, a unifying cultural consciousness (a shared awareness or coherence of meaning) in the face of industrial and technological change.
Kansteiner, “Finding Meaning in Memory: A Methodological Critique of Collective Memory Studies.”
More specifically, Scott describes society as having two layers of information, or transcripts: the public transcript, the story told in front of power (often the official narrative), and the hidden transcript, the offstage doubt and resistance of subordinates that survive out of sight; Scott, “Domination and the Arts of Resistance.”
Bennett, “Artificial Intelligence and the Ethics of Navigating Ambiguity.”
Bennett, “Artificial Intelligence and the Ethics of Navigating Ambiguity.”
Model training depends fundamentally on the establishment of extensive ground-truth datasets. These curated datasets function as the definitive boundaries for both learning and output generation. Without transparent access to these ground-truths, AI models may easily reproduce and amplify the biases and harmful stereotypes embedded within them. However much we may endorse AI’s responses, we rarely have access to the underlying assumptions, which can so easily be distorted.
Pilkington, “Myopic Memory: Capitalism’s New Continuity in the Age of AI.” The “spiral of capitalist hegemony” describes an addiction to human-machine relations in which information undergoes a sequence of closed-loop changes: the model learns from us, and we learn to conform to the model’s logic.
Bennett, “Artificial Intelligence and the Ethics of Navigating Ambiguity.”
Scott, “Domination and the Arts of Resistance.” Consider the interaction between powerholders and their subordinates: within a corporate hierarchy, employees may collectively hold criticisms of management, yet these complaints are often deliberately concealed from the managers (the specified others) and shared only among the employees (the restricted public).
Owen, The Life of Robert Owen.
Bibliography
Bennett, Sj. 2025. “Artificial Intelligence and the Ethics of Navigating Ambiguity.” Big Data & Society 12 (2). https://doi.org/10.1177/20539517251347594.
Hoskins, Andrew. 2025. “AI & Collective Memory.” Current Opinion in Psychology, September, 102156. https://doi.org/10.1016/j.copsyc.2025.102156.
Kansteiner, Wulf. 2002. “Finding Meaning in Memory: A Methodological Critique of Collective Memory Studies.” History and Theory 41 (2): 179–97. https://doi.org/10.1111/0018-2656.00198.
Owen, Robert. 1857. The Life of Robert Owen. https://ci.nii.ac.jp/ncid/BA11548821.
Pilkington, Danny. 2024. “Myopic Memory: Capitalism’s New Continuity in the Age of AI.” Memory, Mind & Media 3 (January). https://doi.org/10.1017/mem.2024.21.
Russell, Nicolas. 2006. “Collective Memory Before and After Halbwachs.” The French Review 79 (4): 792–804. https://www.jstor.org/stable/25480359.
Scott, James C. 1990. Domination and the Arts of Resistance: Hidden Transcripts. Yale University Press. http://www.jstor.org/stable/j.ctt1np6zz.