remove unnecessary secondary clauses

commit 73099520ca
parent 1c70757fb9
Author: user
Date: 2026-03-04 14:56:44 -08:00

@@ -22,11 +22,9 @@ reading.
 Even outside the "not X but Y" pivot, models use em-dashes at far higher rates
 than human writers. They substitute em-dashes for commas, semicolons,
-parentheses, colons, and periods, often multiple times per paragraph. A human
-writer might use one or two in a piece for a specific parenthetical effect.
-Models scatter them everywhere because the em-dash can stand in for any other
-punctuation mark, so they default to it. More than two or three per page is a
-signal.
+parentheses, colons, and periods. A human writer might use one or two in a
+piece. Models scatter them everywhere because the em-dash can stand in for any
+other punctuation mark. More than two or three per page is a signal.

 ### The Colon Elaboration
@@ -52,7 +50,7 @@ bother maintaining.
 Runs of very short sentences at the same cadence. Human writers use a short
 sentence for emphasis occasionally, but stacking three or four of them in a row
-at matching length creates a mechanical regularity that reads as generated.
+at matching length creates a mechanical regularity.

 ### The Two-Clause Compound Sentence
@@ -60,7 +58,7 @@ Possibly the most pervasive tell, and easy to miss because each individual
 instance looks like normal English. The model produces sentence after sentence
 where an independent clause is followed by a comma, a conjunction ("and," "but,"
 "which," "because"), and a second independent clause of similar length. Every
-sentence becomes two balanced halves joined in the middle.
+sentence becomes two balanced halves.

 > "The construction itself is perfectly normal, which is why the frequency is
 > what gives it away." "They contain zero information, and the actual point
@@ -71,8 +69,7 @@ sentence becomes two balanced halves joined in the middle.
 Human prose has sentences with one clause, sentences with three, sentences that
 start with a subordinate clause before reaching the main one, sentences that
 embed their complexity in the middle. When every sentence on the page has that
-same two-part structure, the rhythm becomes monotonous in a way that's hard to
-pinpoint but easy to feel.
+same two-part structure, the rhythm becomes monotonous.

 ### Uniform Sentences Per Paragraph
@@ -80,14 +77,13 @@ Model-generated paragraphs contain between three and five sentences. This count
 holds steady across a piece. If the first paragraph has four sentences, every
 subsequent paragraph will too. Human writers are much more varied (a single
 sentence followed by one that runs eight or nine) because they follow the shape
-of an idea, not a template.
+of an idea.

 ### The Dramatic Fragment

 Sentence fragments used as standalone paragraphs for emphasis, like "Full stop."
 or "Let that sink in." on their own line. Using one in an essay is a reasonable
-stylistic choice, but models drop them in once per section or more, at which
-point it becomes a habit.
+stylistic choice, but models drop them in once per section or more.

 ### The Pivot Paragraph
@@ -102,14 +98,12 @@ Delete every one of these and the piece reads better.
 > "This is, of course, a simplification." "There are, to be fair, exceptions."

 Parenthetical asides inserted to look thoughtful. The qualifier never changes
-the argument that follows it. Its purpose is to perform nuance, not to express a
-real reservation about what's being said.
+the argument that follows it. Its purpose is to perform nuance.

 ### The Unnecessary Contrast

 Models append a contrasting clause to statements that don't need one, tacking on
-"whereas," "as opposed to," "unlike," or "except that" to draw a comparison the
-reader could already infer.
+"whereas," "as opposed to," "unlike," or "except that."

 > "Models write one register above where a human would, whereas human writers
 > tend to match register to context."
@@ -120,8 +114,7 @@ still says everything it needs to, the contrast was filler.
 ### Unnecessary Elaboration

-Models keep going after the sentence has already made its point, tacking on
-clarifying phrases, adverbial modifiers, or restatements that add nothing.
+Models keep going after the sentence has already made its point.

 > "A person might lean on one or two of these habits across an entire essay, but
 > LLM output will use fifteen of them per paragraph, consistently, throughout
@@ -129,9 +122,9 @@ clarifying phrases, adverbial modifiers, or restatements that add nothing.
 This sentence could end at "paragraph." The words after it just repeat what "per
 paragraph" already means. Models do this because they're optimizing for clarity
-at the expense of concision, and because their training rewards thoroughness.
-The result is prose that feels padded. If you can cut the last third of a
-sentence without losing any meaning, the last third shouldn't be there.
+at the expense of concision. The result is prose that feels padded. If you can
+cut the last third of a sentence without losing any meaning, the last third
+shouldn't be there.

 ### The Question-Then-Answer
@@ -167,16 +160,15 @@ becomes "craft." The tendency holds regardless of topic or audience.
 "Importantly," "essentially," "fundamentally," "ultimately," "inherently,"
 "particularly," "increasingly." Dropped in to signal that something matters,
-which is unnecessary when the writing itself already makes the importance clear.
+which is unnecessary when the writing itself makes the importance clear.

 ### The "Almost" Hedge

 Models rarely commit to an unqualified statement. Instead of saying a pattern
 "always" or "never" does something, they write "almost always," "almost never,"
 "almost certainly," "almost exclusively." The word "almost" shows up at high
-density in model-generated analytical prose. It's a micro-hedge, less obvious
-than the full hedge stack but just as diagnostic when it appears ten or fifteen
-times in a single document.
+density in model-generated analytical prose. It's a micro-hedge, diagnostic in
+volume.

 ### "In an era of..."
@@ -184,7 +176,7 @@ times in a single document.
 A model habit as an essay opener. The model uses it to stall while it figures
 out what the actual argument is. Human writers don't begin a piece by zooming
-out to the civilizational scale before they've said anything specific.
+out to the civilizational scale.

 ---
@@ -196,7 +188,7 @@ out to the civilizational scale before they've said anything specific.
 Every argument followed by a concession, every criticism softened. A direct
 artifact of RLHF training, which penalizes strong stances. Models reflexively
-both-sides everything even when a clear position would serve the reader better.
+both-sides everything.

 ### The Throat-Clearing Opener
@@ -204,8 +196,7 @@ both-sides everything even when a clear position would serve the reader better.
 > has never been more important."

 The first paragraph of most model-generated essays adds no information. Delete
-it and the piece improves immediately. The actual argument starts in paragraph
-two.
+it and the piece improves.

 ### The False Conclusion
@@ -241,7 +232,7 @@ vague than risk being wrong about anything.
 > "This can be a deeply challenging experience." "Your feelings are valid."

 Generic emotional language that could apply equally to a bad day at work or a
-natural disaster. That interchangeability is what makes it identifiable.
+natural disaster.

 ---
@@ -251,21 +242,20 @@ natural disaster. That interchangeability is what makes it identifiable.
 If the first section of a model-generated essay runs about 150 words, every
 subsequent section will fall between 130 and 170. Human writing is much more
-uneven, with 50 words in one section and 400 in the next.
+uneven.

 ### The Five-Paragraph Prison

 Model essays follow a rigid introduction-body-conclusion arc even when nobody
 asked for one. The introduction previews the argument, the body presents 3 to 5
-points, and then the conclusion restates the thesis using slightly different
-words.
+points, and then the conclusion restates the thesis.

 ### Connector Addiction

 Look at the first word of each paragraph in model output. You'll find an
 unbroken chain of transition words: "However," "Furthermore," "Moreover,"
 "Additionally," "That said," "To that end," "With that in mind," "Building on
-this." Human prose moves between ideas without announcing every transition.
+this." Human prose doesn't do this.

 ### Absence of Mess
@@ -276,8 +266,7 @@ a thought genuinely unfinished, or keep a sentence the writer liked the sound of
 even though it doesn't quite work.

 Human writing does all of those things regularly. That total absence of rough
-patches and false starts is one of the strongest signals that text was
-machine-generated.
+patches and false starts is one of the strongest signals.

 ---
@@ -289,7 +278,6 @@ machine-generated.
 Zooming out to claim broader significance without substantiating it. The model
 has learned that essays are supposed to gesture at big ideas, so it gestures.
-Nothing concrete is behind the gesture.

 ### "It's important to note that..."
@@ -302,8 +290,7 @@ verbal tics before a qualification the model believes someone expects.
 Models rely on a small, predictable set of metaphors ("double-edged sword," "tip
 of the iceberg," "north star," "building blocks," "elephant in the room,"
 "perfect storm," "game-changer") and reach for them with unusual regularity
-across every topic. The pool is noticeably smaller than what human writers draw
-from.
+across every topic.

 ---
@@ -314,10 +301,9 @@ Humans write "crucial." Humans ask rhetorical questions.
 What gives it away is how many of these show up at once. Model output will hit
 10 to 20 of these patterns per page. Human writing might trigger 2 or 3,
-distributed unevenly, mixed with idiosyncratic constructions no model would
-produce. When every paragraph on the page reads like it came from the same
-careful, balanced, slightly formal, structurally predictable process, it was
-generated by one.
+distributed unevenly. When every paragraph on the page reads like it came from
+the same careful, balanced, slightly formal, structurally predictable process,
+it was generated by one.

 ---