Marshall Steinbaum’s Viral Claim: Removing Em-Dashes from Claude Output Becomes a Daily Task for a University of Chicago–Trained Economist
AI‑Assisted Writing Quietly Redefines Scholarly Work
The rapid adoption of artificial-intelligence tools for drafting written material has begun to reshape even the most specialized fields of research. The shift is not a straightforward triumph of efficiency; it has also surfaced laborious tasks that were previously invisible to the academic community. A recent social-media post by a University of Chicago–trained economist has thrown a spotlight on the quotidian nature of these emerging responsibilities.
Marshall Steinbaum, an economist who earned his PhD at the University of Chicago, posted a concise statement on a public micro-blogging platform that quickly captured the attention of scholars, technologists, and commentators alike. The core of the message conveyed that, despite his doctorate, the dominant element of his daily work now involves meticulously editing output produced by Claude, a large language model. Steinbaum framed the task as removing em-dashes, punctuation marks that appear frequently in natural prose, in order to conceal the unmistakable fingerprint of an algorithmic author.
In the original post, Steinbaum wrote, “I have a PhD in economics from the University of Chicago and my main work task these days is removing em‑dashes from Claude output so it’s not overly obvious.” The brevity of the statement belied a deeper story about the evolving expectations placed on highly trained professionals, who must now act both as subject-matter experts and as human curators of machine-generated content.
Why Em‑Dashes Matter in the Context of AI‑Generated Text
Em‑dashes occupy a distinctive niche in written English. Their presence can signal a writer’s willingness to interject clauses, create dramatic pauses, or embed nuanced commentary—all characteristics that lend a text a sense of organic flow. When large‑language models like Claude produce drafts, the prevalence of em‑dashes can become a subtle indicator that the material originated from a non‑human source. Consequently, the removal of these punctuation marks transforms a text that might otherwise betray its machine provenance into a smoother, ostensibly human‑crafted narrative.
Steinbaum’s emphasis on this particular editing step underscores a broader concern: readers, whether professors, policymakers, or members of the general public, often have an intuitive sense for the stylistic quirks that distinguish human authorship from algorithmic output. By excising em-dashes, he attempts to neutralize a visual cue that could otherwise raise questions about credibility, authorship, or the authenticity of the underlying analysis.
Community Reaction: Humor, Critique, and Reflection
The response to Steinbaum’s post unfolded across several layers of discourse. Many responded with humor, riffing on the absurdity of dedicating professional time to a seemingly trivial typographic adjustment. Others offered pointed critique, recognizing that the very need for such a task reveals a systemic tension between the efficiency promised by AI and the meticulous standards of academic publishing.
One commentator noted that the observation “highlights the broader implications of AI integration in scholarly workflows,” suggesting that the phenomenon could be a harbinger of a new class of editorial labor. Others invoked the stereotype of University of Chicago economics graduates, joking that “Chicago PhDs often hide complex ideas behind dense formulas and appendices, and now they must also hide the fact that those ideas were partially drafted by a machine.”
Further extending the conversation beyond economics, a PhD holder in electrical engineering shared a parallel experience: editing AI‑generated corporate reports and presentation decks has become a routine element of the professional schedule. This anecdote reinforced the notion that the practice of “de‑AI‑ing” text is not confined to a single discipline but is spreading across a spectrum of technical and non‑technical fields.
Amidst the lighthearted banter, a subset of participants raised substantive questions about originality, authenticity, and the ethics of presenting machine‑assisted writing as purely human output. A user suggested creating a configuration file—referred to as a “Claude.md” file—to program Claude not to generate em‑dashes in the first place, thereby eliminating the need for manual post‑processing. This suggestion hinted at a growing awareness that the workflow could be optimized at the prompt‑engineering level rather than relying on downstream human correction.
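For readers unfamiliar with the convention the commenter is invoking: tools such as Claude Code read persistent, project-level instructions from a markdown file (commonly named CLAUDE.md). A minimal sketch of the style rule being proposed might look like the following; the exact wording is illustrative, not a documented directive:

```markdown
# CLAUDE.md

## Style rules for generated prose

- Do not use em-dashes (—) anywhere in generated text.
- Where an em-dash would normally appear, use a comma, a colon,
  or parentheses instead.
```

Because the model consults this file on every request, the constraint applies without being restated in each prompt.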
Another voice emphasized the perceptual aspect of punctuation, stating, “You’re removing em‑dashes so that casual readers won’t think it’s AI, even though any serious reader would encounter such punctuation in standard prose.” The sentiment articulated a paradox: the very act of sanitizing the text for human consumption acknowledges the existence of an underlying machine process, thereby making the concealment effort itself a marker of AI involvement.
Further dialogue explored the root cause of the issue. One participant argued that the problem lies within the prompting strategy, advising that “just tell Claude not to use em‑dashes.” This perspective suggests that the solution could be as straightforward as refining the instruction set provided to the language model, thereby aligning output more closely with desired stylistic conventions.
Yet another user highlighted a cultural dimension, noting, “The annoying part of this is that you’re removing em‑dashes from your work so that people who hardly ever read won’t think it’s an AI. When, in fact, if you read any non‑fiction written by humans of any reputable prose style you’ll find em‑dashes on the first page.” This observation underscores the tension between editorial standards aimed at a broad audience and the conventions that seasoned readers expect from high‑quality writing.
Implications for Academic Labor and Future Practices
The viral spread of Steinbaum’s statement has catalyzed a broader examination of how AI tools are reshaping the division of labor within academia. Historically, scholars have devoted most of their professional time to hypothesis formation, data collection, analytical modeling, and the synthesis of findings. Sophisticated text generators add a new, largely invisible layer to this workflow: human editors must sanitize, contextualize, and humanize machine-produced drafts.
From a practical perspective, the time spent removing em‑dashes represents a microcosm of a larger set of editorial interventions. These may include adjusting tone, ensuring logical coherence, verifying factual accuracy, and aligning the document with disciplinary citation styles. While the removal of a punctuation mark appears trivial, it exemplifies the broader reality that AI‑generated content rarely arrives ready for publication without a substantial human touch.
Steinbaum’s experience also raises questions about the skill set future economists, engineers, and scholars will need. In addition to domain expertise, proficiency in prompt engineering, familiarity with AI model behavior, and the ability to perform nuanced textual revisions may become essential parts of a scholar’s toolkit. Educational programs may need to adapt curricula to cover AI-assisted writing, ethical considerations, and best-practice guidelines for post-generation editing.
The discourse surrounding Steinbaum’s post suggests that the academic community is at a crossroads. On one side lies the promise of accelerated draft production, broader access to writing assistance, and the potential for interdisciplinary collaboration facilitated by AI. On the other stands the risk of over-reliance on machine output, erosion of scholarly voice, and the hidden labor required to restore a human veneer to AI-generated prose.
Looking Ahead: Balancing Efficiency with Authenticity
As the conversation continues, a recurring theme emerges: the balance between leveraging AI for efficiency and preserving the authenticity that readers expect from scholarly work. Steinbaum’s anecdote about em-dash removal encapsulates this dilemma in a concrete, relatable form. The solution may involve a combination of refined prompting strategies, model fine-tuning, and transparent disclosure practices that acknowledge the role of AI in the authoring process.
One prospective approach is to embed style constraints directly into the prompt, instructing Claude to follow a specific punctuation convention from the outset. Doing so could minimize downstream editing of em-dashes, freeing scholars like Steinbaum to focus on substantive content rather than typographic sanitization.
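As a concrete illustration, such a constraint can be prepended to the system prompt before any request is sent. The sketch below only assembles the prompt text; the wording of the rule and the helper name are illustrative assumptions, and the actual API call is omitted:

```python
# Bake punctuation rules into the system prompt up front, so em-dashes
# never appear in the draft and no post-hoc cleanup pass is needed.

STYLE_RULES = (
    "Never use em-dashes in your output. "
    "Prefer commas, colons, or parentheses for asides."
)

def build_system_prompt(base_instructions: str) -> str:
    """Prepend house-style punctuation rules to the task instructions.

    The combined string is what would be supplied as the system prompt
    of a chat request (this sketch stops short of making the call).
    """
    return f"{STYLE_RULES}\n\n{base_instructions}"
```

The returned string would then be passed as the system prompt of a chat-completion request, so every draft is generated under the constraint rather than corrected after the fact.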
Another avenue involves developing post‑processing tools that automatically adjust punctuation according to predefined style guides. Such tools could be integrated into existing word‑processing pipelines, thereby reducing the manual effort currently required by professionals across disciplines.
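A minimal sketch of such a post-processing tool, assuming a style guide that replaces each em-dash (U+2014), along with any surrounding whitespace, with a comma and a space:

```python
import re

# U+2014 is the em-dash character that LLM drafts tend to favor.
# The pattern also consumes whitespace around the dash so the
# replacement does not leave doubled spaces behind.
_EM_DASH_RUN = re.compile(r"\s*\u2014\s*")

def normalize_em_dashes(text: str, replacement: str = ", ") -> str:
    """Replace every em-dash (and adjacent whitespace) with a
    house-style alternative; the default follows a comma convention."""
    return _EM_DASH_RUN.sub(replacement, text)
```

Blindly substituting a comma is not always grammatical, so a human pass would still review each change; the point of the tool is to remove the purely mechanical part of the task.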
Ultimately, the dialogue sparked by Steinbaum’s viral confession highlights an evolving reality: artificial intelligence is not merely a peripheral aid but an integral component of the modern scholarly ecosystem. The challenge for the academic community, and for institutions like the University of Chicago that train its researchers, will be to harness this technology responsibly while safeguarding the distinctive human elements that lend credibility, nuance, and depth to research communication.
Steinbaum’s experience serves as both a cautionary tale and a catalyst for ongoing discussion about the future of academic labor, the ethical dimensions of AI-assisted writing, and the practical steps needed to ensure that technology enhances, rather than diminishes, the rigor and authenticity of scholarly output.