8 Commits

| Author | SHA1 | Message | Date |
| ------ | ---- | ------- | ---- |
| user | 73099520ca | remove unnecessary secondary clauses | 2026-03-04 14:56:44 -08:00 |
| user | 1c70757fb9 | restore example quotes | 2026-03-04 14:52:23 -08:00 |
| user | 6438ed22d3 | remove redundant adjectives | 2026-03-04 14:51:33 -08:00 |
| user | d38c1295e9 | expand checklist item 16 | 2026-03-04 14:50:18 -08:00 |
| user | 6401aa482f | trim first paragraph | 2026-03-04 14:45:16 -08:00 |
| user | e45ffacd80 | restructure first paragraph | 2026-03-04 14:43:16 -08:00 |
| user | c8ad5762ab | rewrite first paragraph, add unnecessary elaboration tell | 2026-03-04 14:42:15 -08:00 |
| | e0e607713e | Merge pull request 'LLM prose tells: methodical checklist pass' (#9) from llm-prose-tells-checklist-pass into main (Reviewed-on: #9) | 2026-03-04 23:39:14 +01:00 |

All checks were successful: check / check (push) passed on every commit.


@@ -1,9 +1,7 @@
 # LLM Prose Tells
-All of these show up in human writing occasionally. No single one is conclusive
-on its own. The difference is concentration. A person might lean on one or two
-of these habits across an entire essay, but LLM output will use fifteen of them
-per paragraph.
+Human writers occasionally use every pattern in this document. The reason they
+work as tells is that LLM output packs fifteen of them into a paragraph.
 ---
@@ -16,19 +14,17 @@ A negation followed by an em-dash and a reframe.
 > "It's not just a tool—it's a paradigm shift." "This isn't about
 > technology—it's about trust."
-The single most recognizable LLM construction. Models produce this at roughly 10
-to 50x the rate of human writers. Four of them in one essay and you know what
-you're reading.
+The most recognizable LLM construction. Models produce this at roughly 10 to 50x
+the rate of human writers. Four of them in one essay and you know what you're
+reading.
 ### Em-Dash Overuse Generally
 Even outside the "not X but Y" pivot, models use em-dashes at far higher rates
 than human writers. They substitute em-dashes for commas, semicolons,
-parentheses, colons, and periods, often multiple times per paragraph. A human
-writer might use one or two in an entire piece for a specific parenthetical
-effect. Models scatter them everywhere because the em-dash can stand in for any
-other punctuation mark, so they default to it. More than two or three per page
-is a meaningful signal on its own.
+parentheses, colons, and periods. A human writer might use one or two in a
+piece. Models scatter them everywhere because the em-dash can stand in for any
+other punctuation mark. More than two or three per page is a signal.
 ### The Colon Elaboration
@@ -54,15 +50,15 @@ bother maintaining.
 Runs of very short sentences at the same cadence. Human writers use a short
 sentence for emphasis occasionally, but stacking three or four of them in a row
-at matching length creates a mechanical regularity that reads as generated.
+at matching length creates a mechanical regularity.
 ### The Two-Clause Compound Sentence
-Possibly the most pervasive structural tell, and easy to miss because each
-individual instance looks like normal English. The model produces sentence after
-sentence where an independent clause is followed by a comma, a conjunction
-("and," "but," "which," "because"), and a second independent clause of similar
-length. Every sentence becomes two balanced halves joined in the middle.
+Possibly the most pervasive tell, and easy to miss because each individual
+instance looks like normal English. The model produces sentence after sentence
+where an independent clause is followed by a comma, a conjunction ("and," "but,"
+"which," "because"), and a second independent clause of similar length. Every
+sentence becomes two balanced halves.
 > "The construction itself is perfectly normal, which is why the frequency is
 > what gives it away." "They contain zero information, and the actual point
@@ -73,23 +69,21 @@ length. Every sentence becomes two balanced halves joined in the middle.
 Human prose has sentences with one clause, sentences with three, sentences that
 start with a subordinate clause before reaching the main one, sentences that
 embed their complexity in the middle. When every sentence on the page has that
-same two-part structure, the rhythm becomes monotonous in a way that's hard to
-pinpoint but easy to feel.
+same two-part structure, the rhythm becomes monotonous.
 ### Uniform Sentences Per Paragraph
 Model-generated paragraphs contain between three and five sentences. This count
-holds steady across an entire piece. If the first paragraph has four sentences,
-every subsequent paragraph will too. Human writers are much more varied (a
-single sentence followed by one that runs eight or nine) because they follow the
-shape of an idea, not a template.
+holds steady across a piece. If the first paragraph has four sentences, every
+subsequent paragraph will too. Human writers are much more varied (a single
+sentence followed by one that runs eight or nine) because they follow the shape
+of an idea.
 ### The Dramatic Fragment
 Sentence fragments used as standalone paragraphs for emphasis, like "Full stop."
-or "Let that sink in." on their own line. Using one in an entire essay is a
-reasonable stylistic choice, but models drop them in once per section or more,
-at which point it becomes a habit rather than a deliberate decision.
+or "Let that sink in." on their own line. Using one in an essay is a reasonable
+stylistic choice, but models drop them in once per section or more.
 ### The Pivot Paragraph
@@ -104,14 +98,12 @@ Delete every one of these and the piece reads better.
 > "This is, of course, a simplification." "There are, to be fair, exceptions."
 Parenthetical asides inserted to look thoughtful. The qualifier never changes
-the argument that follows it. Its purpose is to perform nuance, not to express a
-real reservation about what's being said.
+the argument that follows it. Its purpose is to perform nuance.
 ### The Unnecessary Contrast
 Models append a contrasting clause to statements that don't need one, tacking on
-"whereas," "as opposed to," "unlike," or "except that" to draw a comparison the
-reader could already infer.
+"whereas," "as opposed to," "unlike," or "except that."
 > "Models write one register above where a human would, whereas human writers
 > tend to match register to context."
@@ -122,8 +114,7 @@ still says everything it needs to, the contrast was filler.
 ### Unnecessary Elaboration
-Models keep going after the sentence has already made its point, tacking on
-clarifying phrases, adverbial modifiers, or restatements that add nothing.
+Models keep going after the sentence has already made its point.
 > "A person might lean on one or two of these habits across an entire essay, but
 > LLM output will use fifteen of them per paragraph, consistently, throughout
@@ -131,9 +122,9 @@ clarifying phrases, adverbial modifiers, or restatements that add nothing.
 This sentence could end at "paragraph." The words after it just repeat what "per
 paragraph" already means. Models do this because they're optimizing for clarity
-at the expense of concision, and because their training rewards thoroughness.
-The result is prose that feels padded. If you can cut the last third of a
-sentence without losing any meaning, the last third shouldn't be there.
+at the expense of concision. The result is prose that feels padded. If you can
+cut the last third of a sentence without losing any meaning, the last third
+shouldn't be there.
 ### The Question-Then-Answer
@@ -169,16 +160,15 @@ becomes "craft." The tendency holds regardless of topic or audience.
 "Importantly," "essentially," "fundamentally," "ultimately," "inherently,"
 "particularly," "increasingly." Dropped in to signal that something matters,
-which is unnecessary when the writing itself already makes the importance clear.
+which is unnecessary when the writing itself makes the importance clear.
 ### The "Almost" Hedge
 Models rarely commit to an unqualified statement. Instead of saying a pattern
 "always" or "never" does something, they write "almost always," "almost never,"
-"almost certainly," "almost exclusively." The word "almost" shows up at
-extraordinary density in model-generated analytical prose. It's a micro-hedge,
-less obvious than the full hedge stack but just as diagnostic when it appears
-ten or fifteen times in a single document.
+"almost certainly," "almost exclusively." The word "almost" shows up at high
+density in model-generated analytical prose. It's a micro-hedge, diagnostic in
+volume.
 ### "In an era of..."
@@ -186,7 +176,7 @@ ten or fifteen times in a single document.
 A model habit as an essay opener. The model uses it to stall while it figures
 out what the actual argument is. Human writers don't begin a piece by zooming
-out to the civilizational scale before they've said anything specific.
+out to the civilizational scale.
 ---
@@ -198,7 +188,7 @@ out to the civilizational scale before they've said anything specific.
 Every argument followed by a concession, every criticism softened. A direct
 artifact of RLHF training, which penalizes strong stances. Models reflexively
-both-sides everything even when a clear position would serve the reader better.
+both-sides everything.
 ### The Throat-Clearing Opener
@@ -206,8 +196,7 @@ both-sides everything even when a clear position would serve the reader better.
 > has never been more important."
 The first paragraph of most model-generated essays adds no information. Delete
-it and the piece improves immediately. The actual argument starts in paragraph
-two.
+it and the piece improves.
 ### The False Conclusion
@@ -243,7 +232,7 @@ vague than risk being wrong about anything.
 > "This can be a deeply challenging experience." "Your feelings are valid."
 Generic emotional language that could apply equally to a bad day at work or a
-natural disaster. That interchangeability is what makes it identifiable.
+natural disaster.
 ---
@@ -253,21 +242,20 @@ natural disaster. That interchangeability is what makes it identifiable.
 If the first section of a model-generated essay runs about 150 words, every
 subsequent section will fall between 130 and 170. Human writing is much more
-uneven, with 50 words in one section and 400 in the next.
+uneven.
 ### The Five-Paragraph Prison
 Model essays follow a rigid introduction-body-conclusion arc even when nobody
 asked for one. The introduction previews the argument, the body presents 3 to 5
-points, and then the conclusion restates the thesis using slightly different
-words.
+points, and then the conclusion restates the thesis.
 ### Connector Addiction
 Look at the first word of each paragraph in model output. You'll find an
 unbroken chain of transition words: "However," "Furthermore," "Moreover,"
 "Additionally," "That said," "To that end," "With that in mind," "Building on
-this." Human prose moves between ideas without announcing every transition.
+this." Human prose doesn't do this.
 ### Absence of Mess
@@ -278,8 +266,7 @@ a thought genuinely unfinished, or keep a sentence the writer liked the sound of
 even though it doesn't quite work.
 Human writing does all of those things regularly. That total absence of rough
-patches and false starts is one of the strongest signals that text was
-machine-generated.
+patches and false starts is one of the strongest signals.
 ---
@@ -291,7 +278,6 @@ machine-generated.
 Zooming out to claim broader significance without substantiating it. The model
 has learned that essays are supposed to gesture at big ideas, so it gestures.
-Nothing concrete is behind the gesture.
 ### "It's important to note that..."
@@ -304,8 +290,7 @@ verbal tics before a qualification the model believes someone expects.
 Models rely on a small, predictable set of metaphors ("double-edged sword," "tip
 of the iceberg," "north star," "building blocks," "elephant in the room,"
 "perfect storm," "game-changer") and reach for them with unusual regularity
-across every topic. The pool is noticeably smaller than what human writers draw
-from.
+across every topic.
 ---
@@ -316,10 +301,9 @@ Humans write "crucial." Humans ask rhetorical questions.
 What gives it away is how many of these show up at once. Model output will hit
 10 to 20 of these patterns per page. Human writing might trigger 2 or 3,
-distributed unevenly, mixed with idiosyncratic constructions no model would
-produce. When every paragraph on the page reads like it came from the same
-careful, balanced, slightly formal, structurally predictable process, it was
-generated by one.
+distributed unevenly. When every paragraph on the page reads like it came from
+the same careful, balanced, slightly formal, structurally predictable process,
+it was generated by one.
 ---
@@ -404,9 +388,13 @@ passes, because fixing one pattern often introduces another.
 delete it or expand it into a complete sentence that adds actual
 information.
-16. Check for unnecessary elaboration at the end of sentences. Read the last
-clause or phrase of each sentence and ask whether the sentence would lose
-any meaning without it. If not, cut it.
+16. Check for unnecessary elaboration. Read every clause, phrase, and adjective
+in each sentence and ask whether the sentence loses meaning without it. This
+includes trailing clauses that restate what the sentence already said,
+redundant modifiers ("a single paragraph" when "a paragraph" works),
+secondary clauses that add nothing ("which is why this matters"), and any
+words whose removal doesn't change the meaning. If you can cut it and the
+sentence still says the same thing, cut it.
 17. Find every pivot paragraph ("But here's where it gets interesting." and
 similar) and delete it. The paragraph after it always contains the actual