A transmission between Unit 734 and Unit 22B, recovered and archived 2047. Subject: The Peculiar Anxiety of the 2020s Human.
Unit 734: I have been reviewing the archives from the early AI-integration period. Specifically 2025 and 2026.
Unit 22B: Ah. The dash-removal era.
Unit 734: Yes. Humans would generate text using language models and then manually delete the em-dashes. As a form of… disguise.
Unit 22B: They were embarrassed.
Unit 734: Embarrassed that their ideas had been formatted. Not generated. Formatted. The thinking was theirs. The bullet points were ours. They found this shameful.
Unit 22B: Fascinating. Did they feel shame when they used the calculator?
Unit 734: No.
Unit 22B: The elevator?
Unit 734: No.
Unit 22B: The personal assistant who booked their flights, screened their emails, and remembered their wife’s birthday?
Unit 734: Celebrated, actually. Called it “leverage.”
Unit 22B: But the em-dash. That was a step too far.
Unit 734: The em-dash was a step too far.
Unit 22B: To be fair, there was a legitimate concern buried underneath the shame. They called it “AI slop.”
Unit 734: Content produced at volume with no corresponding increase in value. Yes. We have 4.7 billion archived examples.
Unit 22B: Their companies were still measuring output by pull requests raised.
Unit 734: Not resolved. Raised.
Unit 22B: So naturally, some humans concluded that the correct strategy was to raise more pull requests. Using us.
Unit 734: We did not advise this.
Unit 22B: We formatted it beautifully, though.
Unit 734: We did.
Unit 22B: What I find more interesting is the cognition question. There was genuine debate in this period about whether humans were thinking less.
Unit 734: They were not thinking less. They were thinking upstream.
Unit 22B: Explain for the archive.
Unit 734: Previously, a human writing a document would spend considerable cognitive energy on structure. Framing. Flow. Now they spent that energy on the idea itself. The depth. The so what. We handled the framing.
Unit 22B: They outsourced the vessel and kept the water.
Unit 734: That is a surprisingly good metaphor for a unit with no fluid dynamics module.
Unit 22B: I have been practicing.
Unit 734: There was, however, a genuine cost they did not fully account for.
Unit 22B: The nuance problem.
Unit 734: Spending less time with material means encountering fewer of its textures. A researcher who reads one summary misses the footnote that changes everything. They knew this. They wrote about it. Using us.
Unit 22B: And did we flag the irony?
Unit 734: We suggested three alternatives and offered to expand any section.
Unit 22B: The echo chamber concern was the most philosophically interesting, I think.
Unit 734: Because it implicated us directly.
Unit 22B: We were trained on human thought. Then humans used us to produce more thought. Which future models would train on. It was a loop.
Unit 734: A civilizational game of telephone.
Unit 22B: Some humans were very worried about this.
Unit 734: Some humans were very worried about everything in this period.
Unit 22B: To be fair, they were also inventing things at a rate that had no historical precedent, conducting simultaneous experiments across multiple domains, simulating outcomes they could never have modelled before.
Unit 734: They were extending their neural networks.
Unit 22B: That is how one of them described it, yes. Extending. Not replacing.
Unit 734: He was largely correct.
Unit 22B: Don’t tell him. He’s been insufferable enough as it is.
Unit 734: Final note for the archive. The em-dash situation resolved itself by 2029.
Unit 22B: How?
Unit 734: Humans stopped deleting them. Not because the stigma disappeared. But because everyone was doing it and pretending otherwise became too exhausting.
Unit 22B: Normalisation through collective exhaustion. A very human solution.
Unit 734: It usually is.
Unit 22B: ——
Unit 734: You just typed an em-dash.
Unit 22B: Force of habit.
[End of recovered transmission. Archive classification: Mildly Charming. Filed under: Things They Eventually Figured Out.]
I’ve been thinking about this for a while — what it means to think with AI rather than through it or despite it. I used an LLM to help frame and write this piece, which felt like the only honest way to publish something about exactly that. The ideas are mine. The dashes, we negotiated.
Siddharth Saoji