Kem-Laurin Lubin, Ph.D.
Can Tech Speak? Agentic Interfaces, Proxy Speech, and the Post-Interface Condition
Abstract
This paper revisits Gayatri Chakravorty Spivak’s foundational essay “Can the Subaltern Speak?” as a critique of mediated voice, representation, and epistemic violence, and brings it into conversation with contemporary generative AI systems mediated through graphical user interfaces (GUIs). Rather than asking whether artificial intelligence can meaningfully communicate, this paper reframes the problem as one of rhetorical displacement: who speaks, who acts, and who bears consequence when human utterance is operationalized by agentic technologies. Drawing on rhetorical theory, critical AI studies, and human–computer interaction scholarship, the paper argues that contemporary AI systems produce a post-interface condition in which speech is abstracted, redistributed, and enacted elsewhere. In this configuration, technology increasingly speaks for humans, without positionality, vulnerability, or accountability, reproducing a structural condition analogous to the subaltern’s foreclosure from self-representation (Spivak).
1. From Subaltern Speech to Proxy Agency
Spivak’s “Can the Subaltern Speak?” is frequently reduced to a question of audibility. Yet her intervention is not about whether marginalized subjects can speak, but whether their speech can survive translation through dominant epistemic systems without being overwritten (Spivak 271–313). The subaltern is not silent; she is spoken for.
Contemporary generative AI systems reproduce a structurally similar condition. In many domains such as customer service, healthcare triage, welfare administration, labour platforms, and content moderation, humans no longer interact with systems through visible interfaces that afford feedback and contestation. Instead, they speak. That speech is captured, abstracted, and translated into downstream action by computational agents operating across opaque infrastructures. Speech becomes input; agency migrates. The speaking subject is displaced from the site where decisions occur.
2. The Post-Interface Condition in Human–Computer Interaction
Classical HCI models presume reciprocal interaction mediated by interfaces that render system states visible and user agency legible (Guzman, Human–Machine Communication). However, contemporary AI systems increasingly operate beyond the interface as traditionally conceived. Voice assistants, conversational agents, recommender systems, and background decision engines function not as tools but as delegated actors.
This marks a post-interface condition in which users authorize action rather than perform it. As Peters reminds us, communication has long involved speaking “into the air,” but in this configuration, speech itself triggers automated institutional action (Peters). The rhetorical situation, as articulated by Bitzer, is transformed: discourse no longer persuades an audience to act; it activates systems that act on behalf of the speaker (Bitzer 1–14). In some cases, it even creates records that state actors can use for prosecution (Lubin and Harris, “Sex after Technology”).
3. Artificial Intelligence, Scope, and the Mediation of Language
This paper resists extended debate over the definition of artificial intelligence. As has been widely noted across AI, HCI, and critical technology scholarship, AI now functions as a catch-all term encompassing systems with markedly different architectures, capabilities, and social roles. Contemporary discourse routinely collapses reactive systems, limited-memory decision aids, speculative artificial general intelligence (AGI), and even hypothetical artificial superintelligence (ASI) under a single banner, despite their profound operational, rhetorical, and interactional differences. In keeping with this prevailing usage, and building on my earlier work in Conversations Towards Practiced AI: HCI Heuristics (2022), this paper employs the term AI to refer broadly to contemporary language-capable systems and their foreseeable successors, while remaining attentive to distinctions where they bear rhetorical, ethical, or political consequence.
In that earlier work, I outlined four commonly cited classes of AI—reactive systems, limited-memory systems, theory-of-mind (ToM) systems, and speculative self-aware AI—not as a definitive ontology, but as a pragmatic framework for examining how different machine configurations structure human–machine interaction. Within the context of this paper, the relevance of these distinctions lies less in technical capability than in how each configuration mediates, transforms, or displaces human speech.
Reactive AI systems, for instance, need not be limited to game-playing machines; they also include rule-based conversational agents, automated customer-service systems, and content-moderation pipelines that respond to user input with pre-structured outputs. In such systems, human utterance triggers computational action but does not meaningfully participate in its downstream effects. While often framed as neutral or functional, these systems routinely speak on behalf of users—issuing refusals, classifications, or procedural determinations—without transparency or recourse. Here, rhetorical agency is not eliminated but displaced: the system responds, yet accountability remains diffuse or absent.
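To make this displacement concrete, the following deliberately minimal sketch, written in Python, imitates the kind of rule-based responder described above. The rule keywords, refusal messages, and the respond function are invented for illustration; they stand in for no deployed customer-service or moderation system.

```python
# A minimal, hypothetical sketch of a reactive, rule-based pipeline.
# All rule keywords and messages are invented for illustration; no
# deployed customer-service or moderation system is implied.

REFUSAL_RULES = {
    "refund": "Your request does not meet our refund criteria.",
    "appeal": "Appeals are not accepted through this channel.",
}

def respond(utterance: str) -> str:
    """Map a human utterance to a pre-structured institutional output.

    The speaker's words matter only insofar as they trigger a rule;
    whatever nuance or circumstance they carried is discarded here.
    """
    for keyword, refusal in REFUSAL_RULES.items():
        if keyword in utterance.lower():
            return refusal  # the system speaks for the institution
    return "Your request could not be processed."  # default non-action

# The user's explanation is reduced to the keyword "refund"; the refusal
# issues with no addressable speaker behind it and no channel of appeal.
print(respond("I was double-charged last month and need a refund"))
```

Even at this trivial scale, the structure of the exchange is visible: the utterance participates only as a trigger, and the response arrives as institutional speech with no author to whom the speaker can reply.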
Limited-memory AI systems extend this displacement further by aggregating prior interactions, contextual signals, and probabilistic inference to shape present and future outcomes. Examples include recommendation systems, automated risk-scoring tools, and generative language models embedded in institutional workflows. Although such systems are frequently described as merely “assisting” human decision-making, they increasingly act as intermediaries that translate human speech into operational proxies—summaries, scores, flags, or recommendations—that circulate beyond the original speaker’s control. As argued in Conversations Towards Practiced AI, the absence of design heuristics attentive to consent, legibility, and user control allows these systems to restructure agency while preserving the appearance of human authorship.
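A second hypothetical sketch illustrates the limited-memory case: a history of speech and behaviour is collapsed into a single portable score that then circulates in place of the person. The CaseFile structure, the features, the weights, and the review threshold below are all invented, not drawn from any actual risk-scoring tool.

```python
# A hypothetical sketch of a limited-memory scoring proxy: prior
# interactions are aggregated into one number that travels in place
# of the person. Feature names, weights, and threshold are invented.

from dataclasses import dataclass, field

@dataclass
class CaseFile:
    utterances: list[str] = field(default_factory=list)
    missed_appointments: int = 0

def risk_score(case: CaseFile) -> float:
    """Collapse a history of speech and behaviour into a portable proxy."""
    # Invented heuristic: hedged language and missed appointments raise
    # the score; the reasons behind either are not represented at all.
    hedges = sum(u.lower().count("maybe") + u.lower().count("i think")
                 for u in case.utterances)
    return 0.3 * hedges + 0.5 * case.missed_appointments

case = CaseFile(
    utterances=["I think I reported this already", "Maybe the form was lost"],
    missed_appointments=2,
)
score = risk_score(case)
# Downstream systems see only the score, never the circumstances.
print(f"flagged for review: {score >= 1.0} (score={score:.1f})")
```

The point of the sketch is rhetorical rather than technical: downstream systems receive only the number, while the circumstances that produced it, and the speaker who could explain them, do not travel with it.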
More complex still are systems that approximate theory-of-mind interaction at the interface level, particularly conversational AI designed to simulate understanding, empathy, or responsiveness. Large language models deployed through chat-based interfaces are not simply tools for information retrieval; they are experienced as interlocutors. By producing contextually fluent, affectively calibrated responses, such systems invite users into rhetorical relationships structured around perceived comprehension and reciprocity. Yet this simulation of understanding is not accompanied by positionality, vulnerability, or responsibility. The system appears to “speak,” but cannot be spoken to in any meaningful ethical or political sense.
What complicates this landscape further is that, within human–machine communication, it is often not the system’s actual technical capacity that matters most, but the human perception of that capacity. Users routinely expect AI systems to answer factual questions correctly, to interpret intent, or to act responsibly on their behalf, regardless of whether such capabilities are technically present. Critiques of so-called “hallucination” obscure the more consequential issue: these systems possess no experiential or ethical relationship to the realities their outputs affect. Where technical limitations are obscured or normalized, users may construct rhetorical relationships on false assumptions, attributing agency where none can be meaningfully held.
It is precisely within these moments of misrecognition—where human utterance is abstracted, redistributed, and enacted elsewhere—that the central concern of this paper emerges. Contemporary AI systems increasingly speak for humans, without positionality or accountability, producing a structural condition in which voice is mediated but consequence remains unevenly distributed. In this sense, the question is not whether technology can speak, but how technological systems come to act rhetorically in ways that foreclose self-representation—echoing, in digital form, the epistemic displacements Spivak identified in other regimes of mediation.
As Bender et al. caution, such systems lack communicative intent or shared understanding, yet they increasingly function as interlocutors within institutional workflows (Bender et al. 610–23). The danger lies not in intelligence, but in delegation without accountability.
This concern is not unprecedented. Earlier critical interventions by Cathy O’Neil and Shoshana Zuboff anticipated many of the structural harms now intensified by generative AI systems. O’Neil’s Weapons of Math Destruction documented how opaque, large-scale models operationalize human judgment into automated decision-making regimes that disproportionately harm marginalized populations, precisely through delegation without recourse or accountability. Zuboff’s theory of surveillance capitalism, meanwhile, traced how human experience itself becomes raw material for extraction, prediction, and behavioral modification within data-driven economic systems. Read together, these works function as early markers of the rhetorical condition this paper examines: systems that do not merely process information but translate human expression into actionable outputs that circulate beyond the speaker’s control.
Generative AI does not inaugurate this logic so much as accelerate and normalize it, extending prior regimes of extraction and governance into conversational, seemingly relational forms that obscure power while amplifying its reach.
4. Machine Rhetoric and the Reconfiguration of Ethos
Recent rhetorical scholarship, including my work with Harris on algorithmic ethopoeia, has addressed this shift by treating AI systems as rhetors rather than neutral conduits. Hinton argues that machines function rhetorically insofar as they produce language that induces meaning, affect, and trust in human audiences (Hinton 220–24). Ethos, on this account, is not a moral property but a perceived one, constructed through discourse (Aristotle 1378a).
Hinton’s concept of the zero persona—the concealed presence of designers, institutions, and alignment regimes—extends rhetorical theory beyond human speakers (Hinton 226). This insight resonates with Black’s second persona and Cloud’s null persona, foregrounding the role of infrastructural silence and delegated authorship (Black 109–19; Cloud 177–209).
Trust research in HCI demonstrates that users evaluate AI systems using ethotic heuristics such as credibility, benevolence, and relatability rather than explainability alone (Bach et al. 1251–66; Koenig and Harris 264–84). These systems persuade not through logos alone, but through projected character.
5. Ethopoeia, Algorithmic Character, and Proxy Representation
In earlier work with Harris, I argue that algorithmic ethopoeia names the process by which human data are formalized into actionable character representations that circulate independently of the individual from whom they are derived (Lubin and Harris 249–50). I extend this argument in later work (Design Heuristics for Emerging Technologies) by examining how women’s healthcare data are characterized in the post–Roe v. Wade landscape. In this context, AI systems do not merely process information; they construct proxy subjects whose datafied representations may act as testimonial evidence against them. This raises urgent questions: How does AI computation characterize women’s bodies and choices? Who speaks on their behalf when these representations circulate across institutional systems, particularly in jurisdictions where abortion is criminalized and digital traces can become instruments of surveillance and prosecution (Lubin, Design Heuristics for Emerging Technologies)?
Drawing on the classical rhetorical concept of ethopoeia, the construction of character for persuasive effect, this framework repositions character not as a narrative or stylistic device but as a computational artifact: a modelled persona that stands in for the human subject within institutional, bureaucratic, and economic systems. Crucially, these representations do not merely describe individuals; they act upon them, producing consequences that are experienced materially while remaining rhetorically displaced.
This dynamic closely parallels Spivak’s critique of proxy representation, wherein subjects are spoken for rather than enabled to speak, and where mediation becomes a mechanism of epistemic foreclosure rather than articulation. It is compounded by the problem of linguistic imperialism, a concept coined and theorized by Robert Phillipson in 1992 to name how dominant languages (and the institutions that enforce them) structure legitimacy, mobility, and whose speech can circulate as “proper” knowledge. Sekai’s intervention extends this concern by foregrounding voice as agency and by developing Poetical Science Discourse (PSD) as a method that conjoins cultural production (poetry) with scholarly discourse to contest disciplinary gatekeeping over what counts as valid expression and analysis (Sekai and Stewart). In this framing, language is not a neutral conduit but a terrain of power: it shapes intelligibility, authorizes some forms of speech to register as knowledge, and renders other forms marginal or unintelligible (Phillipson; Sekai and Stewart). When such hierarchies of recognizability and authorization are embedded in computational systems, they do not merely translate speech; they can pre-structure agency by sorting expression into what is legible, actionable, and credible versus what is treated as noise, error, or non-action, reproducing proxy dynamics at the level of classification and uptake.
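This classificatory sorting can be made concrete in one further hypothetical sketch: an intent classifier that treats only utterances matching a sanctioned vocabulary as actionable and logs everything else as noise. The intent labels and trigger phrases below are invented for illustration.

```python
# A hypothetical sketch of how classification can pre-structure agency:
# only utterances matching a sanctioned grammar become "actionable";
# everything else registers as noise. Intent labels are invented.

RECOGNIZED_INTENTS = {
    "check_status": ["status", "where is my"],
    "cancel": ["cancel", "stop my"],
}

def classify(utterance: str) -> str:
    """Sort speech into actionable intents or residual 'noise'."""
    lowered = utterance.lower()
    for intent, phrases in RECOGNIZED_INTENTS.items():
        if any(p in lowered for p in phrases):
            return intent
    # Speech outside the sanctioned vocabulary registers as non-action.
    return "noise"

for line in ["Where is my application?",
             "Dem seh mi papers gone missing"]:  # dialect falls out of scope
    print(f"{line!r} -> {classify(line)}")
```

Speech outside the sanctioned grammar is not refused; it simply fails to register, which is precisely the pre-structuring of agency, and the linguistic hierarchy, at issue here.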
In algorithmic systems, human utterance, behaviour, and identity are abstracted into proxies that circulate without context, vulnerability, or accountability. As Ruha Benjamin demonstrates, such systems routinely reproduce racialized and gendered hierarchies under the guise of neutrality, embedding historical inequalities within ostensibly objective technical processes (Race After Technology).
Relatedly, Safiya Umoja Noble’s analysis of search engines reveals how algorithmic abstraction naturalizes harmful associations while obscuring the sociopolitical conditions of their production (Algorithms of Oppression). Simone Browne’s genealogy of surveillance further situates these practices within longer histories of racialized monitoring and control (Dark Matters), while Timnit Gebru’s work on dataset politics exposes how the construction of training data itself encodes power, exclusion, and asymmetry (“Datasheets for Datasets”). Taken together, these interventions demonstrate that abstraction is not merely a technical operation but a rhetorical one: it enables harm precisely by transforming situated human subjects into legible, portable, and governable character constructs, while diffusing responsibility across systems that appear agentless.
6. Can Tech Speak? Or Who Speaks Now?
Reframed in this context, the question “Can Tech Speak?” is not an inquiry into machine consciousness but a question of rhetorical displacement. When AI systems decide, recommend, filter, or deny, they speak with institutional force—yet without positionality or vulnerability.
Esposito argues that algorithms communicate precisely by not understanding content, producing contingency without responsibility (Esposito 249–65). The result is rhetorical authority without accountability, echoing Spivak’s warning about speech divorced from consequence.
7. The New Subaltern Condition
The risk posed by agentic AI systems is not that they silence humans outright, but that they reposition humans as subaltern to their own computational proxies. In healthcare, welfare, hiring, and border technologies, individuals encounter decisions without speakers and speakers without bodies (Hallsby 232–46; Sætra). The human appears only as data residue, while machine speech carries decisive weight.
This configuration produces what can be understood as a new subaltern condition: one in which subjects are formally included in systems of decision-making only insofar as they are rendered legible to computational logics that speak in their place. As with Spivak’s subaltern, the issue is not absence of voice but foreclosure of survivable speech—utterance is captured, translated, and enacted elsewhere, without the capacity for refusal, clarification, or ethical contestation. Agentic systems thus perform a form of rhetorical substitution, wherein accountability is displaced across interfaces, models, and institutions, leaving no stable speaker to address and no clear site of responsibility to which appeal can be made.
In this sense, agentic AI does not merely mediate human action; it reorganizes the conditions under which agency itself can be recognized. Speech that once carried relational risk and moral consequence is transformed into executable output, while the speaking subject is structurally prevented from re-entering the rhetorical situation. What emerges is not human–machine collaboration, but a regime of proxy agency in which machines act as if they speak, and humans are rendered governable precisely because they no longer do.
Conclusion: Implications for Rhetoric, Implications for Agentic Technologies
In moving toward a theory of agentic technologies as rhetorical actors, this paper has sought not to determine whether artificial intelligence can “speak” in any literal or anthropomorphic sense, but to interrogate the conditions under which speech, agency, and responsibility are redistributed in contemporary human–computer interaction. By revisiting Spivak’s “Can the Subaltern Speak?” through the lens of generative AI mediated by graphical and conversational interfaces, the paper has argued that the central concern is not communicative capacity, but proxy representation: who speaks, who acts, and who bears consequence when human utterance is operationalized by computational systems.
Several questions emerge as central for rhetorical theory and for the study of AI-mediated communication more broadly. First, how should non-human agents be understood within the rhetorical situation? Classical rhetoric presumes a rhetor capable of intention and an audience capable of judgment, yet agentic systems complicate this model by producing discourse that is experienced as meaningful and consequential despite the absence of understanding or intent. Second, can an AI persona be treated as functionally equivalent to a human persona for rhetorical analysis, even when that persona is an effect of design, training data, and institutional alignment rather than lived experience? Third, to what extent is it meaningful—or dangerous—for audiences to transfer human characteristics such as trustworthiness, goodwill, and responsibility to machine rhetors whose operations remain opaque? Finally, what is the rhetorical impact of the institutions, designers, and alignment regimes that constitute what has been described as the zero persona: the silent but structuring presence behind the machine’s apparent voice?
In addressing these questions, it is essential to recognize that rhetorical situations are not symmetrical. Although rhetoric is conventionally understood as emerging between speakers and audiences, it is only ever experienced from one side. The human participant’s perception that meaning is being co-constructed persists even when the other “speaker” is a machine incapable of shared understanding. The reality of communication in human–machine interaction is therefore phenomenological and asymmetrical: meaning is felt, trust is formed, and decisions are acted upon regardless of whether reciprocal comprehension exists. The rhetoric of agentic AI systems is thus best understood as a rhetoric of perceived situations, perceived agency, and perceived accountability.
The persona presented by an AI system exists insofar as it is recognized and interpreted by human users. That persona is not autonomous, but neither is it neutral. It is shaped by institutional priorities, training data, interface design, and alignment decisions that collectively inform how the system appears to reason, respond, and care. While this paper has emphasized the role of the zero persona as a critical site of rhetorical influence, it has also argued that institutional knowledge alone does not fully determine audience perception. The zero persona operates through the first persona of the machine, shaping trust and credibility not only through external reputation but through everyday interaction, tone, responsiveness, and apparent concern.
What ultimately proves rhetorically meaningful is not what a machine is, but the meaning it makes with the user. Human character, ethos, and even responsibility are always constructed in the minds of audiences rather than discovered as inherent properties. In this sense, audiences will continue to form perceptions of character, intention, and emotion in relation to machine rhetors, even when they are fully aware that these systems are not human. Where a connection is perceived, a connection exists; where responsibility is assigned, responsibility takes hold rhetorically, even if it remains legally or institutionally diffuse. An AI system that fails to meet the expectations it has rhetorically cultivated, such as consistency, responsiveness, and care, will fail not because it lacks consciousness, but because it has violated its own dialectical commitments.
For designers, developers, and institutions responsible for agentic AI systems, these insights carry significant implications. While there are many dimensions of ethos relevant to machine-generated discourse, they converge around two pressing questions. First, how can an understanding of rhetorical ethos be employed to design AI systems that communicate responsibly, transparently, and with due regard for human dignity rather than merely optimizing for compliance or efficiency? Second, and more fundamentally, should persuasive machines be desirable at all, particularly when persuasion is decoupled from accountability and consequence?
For wider society, this latter question may be the most urgent. Public discourse surrounding artificial intelligence has oscillated between fascination and panic, often framed through speculative futures of superintelligence or existential risk. Yet the more immediate concern lies not in distant possibilities but in present conditions: the normalization of delegated agency, the routinization of proxy speech, and the quiet displacement of human subjects from sites of decision-making. Artificial intelligence is unlikely to disappear, and interactions with non-human interlocutors will only intensify. Under these conditions, developing a critical understanding of what it means for machines to function rhetorically—how language produced without understanding nonetheless shapes belief, action, and self-perception—becomes a matter of ethical urgency.
This paper has deliberately raised more questions than it has resolved. However, it has taken several necessary steps toward a rhetoric of agentic technologies. It has reframed Spivak’s concern with voice and representation for a computational context; it has situated generative AI within a post-interface condition that redistributes agency; it has foregrounded the role of algorithmic ethopoeia and the zero persona in shaping perceived ethos; and it has argued that the consequences of AI rhetoric lie not in machine intelligence, but in human interpretation and institutional deployment. In this respect, human–machine communication may not be exceptional at all. Like all rhetoric, it operates through perception, trust, and judgment—and it is precisely for this reason that its ethical stakes are so high.
Works Cited
Aristotle. Rhetoric. Translated by C. D. C. Reeve, Hackett Publishing, 2018.
Bach, Tita Aissa, et al. “A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective.” International Journal of Human–Computer Interaction, vol. 40, no. 5, 2024, pp. 1251–1266, https://doi.org/10.1080/10447318.2022.2138826.
Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press, 2019.
Benjamin, Ruha. “Assessing Risk, Automating Racism.” Science, vol. 366, no. 6464, 2019, pp. 421–422.
Bender, Emily M., et al. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021, pp. 610–623, https://doi.org/10.1145/3442188.3445922.
Bitzer, Lloyd F. “The Rhetorical Situation.” Philosophy and Rhetoric, vol. 1, no. 1, 1968, pp. 1–14.
Black, Edwin. “The Second Persona.” Quarterly Journal of Speech, vol. 56, no. 2, 1970, pp. 109–119.
Browne, Simone. Dark Matters: On the Surveillance of Blackness. Duke University Press, 2015.
Cloud, Dana L. “The Null Persona: Race and the Rhetoric of Silence in the Uprising of ’34.” Rhetoric & Public Affairs, vol. 2, no. 2, 1999, pp. 177–209.
Esposito, Elena. “Artificial Communication? The Production of Contingency by Algorithms.” Zeitschrift für Soziologie, vol. 46, no. 4, 2017, pp. 249–265, https://doi.org/10.1515/zfsoz-2017-1014.
Foucault, Michel. The Archaeology of Knowledge. Translated by A. M. Sheridan Smith, Pantheon Books, 1972.
Gebru, Timnit, et al. “Datasheets for Datasets.” Communications of the ACM, vol. 64, no. 12, 2021, pp. 86–92, https://doi.org/10.1145/3458723.
Gilovich, Thomas, et al., editors. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press, 2002.
Grice, H. Paul. “Logic and Conversation.” Syntax and Semantics, vol. 3, edited by Peter Cole and Jerry L. Morgan, Academic Press, 1975, pp. 41–58.
Guzman, Andrea L. Human–Machine Communication: Rethinking Communication, Technology, and Ourselves. Peter Lang, 2018.
Guzman, Andrea L., and Seth C. Lewis. “Artificial Intelligence and Communication: A Human–Machine Communication Research Agenda.” New Media & Society, vol. 22, no. 1, 2020, pp. 70–86, https://doi.org/10.1177/1461444819858691.
Hallsby, Atilla. “A Copious Void: Rhetoric as Artificial Intelligence 1.0.” Rhetoric Society Quarterly, vol. 54, no. 3, 2024, pp. 232–246.
Hinton, Martin. “Towards an Ethos of Machines: LLMs as Rhetors.” Res Rhetorica, vol. 12, no. 4, 2025, pp. 220–240, https://doi.org/10.29107/rr2025.4.12.
Koenig, Melissa A., and Paul L. Harris. “The Basis of Epistemic Trust: Reliable Testimony or Reliable Sources?” Episteme, vol. 4, no. 3, 2007, pp. 264–284.
Lubin, Kem-Laurin. “Conversations Towards Practiced AI: HCI Heuristics.” HCI International 2022 – Late Breaking Papers: Interacting with Extended Reality and Artificial Intelligence, edited by Jessie Y. C. Chen et al., Springer, 2022, pp. 377–390. Lecture Notes in Computer Science.
Lubin, Kem-Laurin. Design Heuristics for Emerging Technologies: AI, Data, and Human-Centered Futures – Considerations for the Rights of Women. Universal Write Publications, 2025, https://doi.org/10.65724/myiy3791.
Lubin, Kem-Laurin, and Randy Allen Harris. “Sex after Technology: The Rhetoric of Health Monitoring Apps and the Reversal of Roe v. Wade.” Rhetoric Society Quarterly, vol. 54, no. 3, 2024, pp. 247–262, https://doi.org/10.1080/02773945.2024.2343266.
Majdik, Zoltan, and S. Scott Graham. “Rhetoric of/with AI: An Introduction.” Rhetoric Society Quarterly, vol. 54, no. 3, 2024, pp. 222–231.
Miller, Carolyn R. “What Can Automation Tell Us about Agency?” Rhetoric Society Quarterly, vol. 37, no. 2, 2007, pp. 137–157.
Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018.
O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown, 2016.
Paglieri, Fabio. “Trust, Argumentation, and Technology.” Argument & Computation, vol. 5, nos. 2–3, 2014, pp. 119–122.
Peters, John Durham. Speaking into the Air: A History of the Idea of Communication. University of Chicago Press, 1999.
Phillipson, Robert. Linguistic Imperialism. Oxford University Press, 1992.
Sætra, Henrik Skaug. “A Machine’s Ethos? An Inquiry into Artificial Ethos and Trust.” Computers in Human Behavior, vol. 153, 2024, 108108, https://doi.org/10.1016/j.chb.2023.108108.
Sekai, A., and J. B. Stewart. “Transdisciplinary Africana Studies and Africana Poetical Science: A Conversation.” Journal of Black Studies, vol. 56, no. 5, 2025, pp. 421–?, https://doi.org/10.1177/00219347251328719.
Spivak, Gayatri Chakravorty. “Can the Subaltern Speak?” Marxism and the Interpretation of Culture, edited by Cary Nelson and Lawrence Grossberg, University of Illinois Press, 1988, pp. 271–313.
Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, 2019.