remove redundant adjectives

This commit is contained in:
user
2026-03-04 14:51:33 -08:00
parent d38c1295e9
commit 6438ed22d3


@@ -14,19 +14,19 @@ A negation followed by an em-dash and a reframe.
> "It's not just a tool—it's a paradigm shift." "This isn't about
> technology—it's about trust."
-The single most recognizable LLM construction. Models produce this at roughly 10
-to 50x the rate of human writers. Four of them in one essay and you know what
-you're reading.
+The most recognizable LLM construction. Models produce this at roughly 10 to 50x
+the rate of human writers. Four of them in one essay and you know what you're
+reading.
### Em-Dash Overuse Generally
Even outside the "not X but Y" pivot, models use em-dashes at far higher rates
than human writers. They substitute em-dashes for commas, semicolons,
parentheses, colons, and periods, often multiple times per paragraph. A human
-writer might use one or two in an entire piece for a specific parenthetical
-effect. Models scatter them everywhere because the em-dash can stand in for any
-other punctuation mark, so they default to it. More than two or three per page
-is a meaningful signal on its own.
+writer might use one or two in a piece for a specific parenthetical effect.
+Models scatter them everywhere because the em-dash can stand in for any other
+punctuation mark, so they default to it. More than two or three per page is a
+signal.
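The "more than two or three per page" threshold is mechanical enough to check in code. A minimal sketch, assuming a 500-word "page" and a flag threshold of 3.0; neither figure comes from the tells doc itself:

```python
# Sketch of an em-dash density check. The words-per-page figure and the
# flag threshold are illustrative assumptions, not from the source text.

EM_DASH = "\u2014"  # U+2014 EM DASH

def em_dashes_per_page(text: str, words_per_page: int = 500) -> float:
    """Return the em-dash count normalized to a words_per_page 'page'."""
    words = len(text.split()) or 1
    return text.count(EM_DASH) * words_per_page / words

def em_dash_flag(text: str, threshold: float = 3.0) -> bool:
    """Flag text that exceeds 'more than two or three per page'."""
    return em_dashes_per_page(text) > threshold
```

Note the normalization matters: on a one-sentence sample a single em-dash produces a huge per-page figure, so the check only means anything on page-length input.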
### The Colon Elaboration
@@ -56,11 +56,11 @@ at matching length creates a mechanical regularity that reads as generated.
### The Two-Clause Compound Sentence
-Possibly the most pervasive structural tell, and easy to miss because each
-individual instance looks like normal English. The model produces sentence after
-sentence where an independent clause is followed by a comma, a conjunction
-("and," "but," "which," "because"), and a second independent clause of similar
-length. Every sentence becomes two balanced halves joined in the middle.
+Possibly the most pervasive tell, and easy to miss because each individual
+instance looks like normal English. The model produces sentence after sentence
+where an independent clause is followed by a comma, a conjunction ("and," "but,"
+"which," "because"), and a second independent clause of similar length. Every
+sentence becomes two balanced halves joined in the middle.
> "The construction itself is perfectly normal, which is why the frequency is
> what gives it away." "They contain zero information, and the actual point
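Since frequency rather than any single instance is the tell, a counting heuristic is the natural check. A rough sketch; the regex and the idea of reporting a ratio are illustrative assumptions, not anything the tells doc prescribes:

```python
import re

# Rough sketch: count sentences shaped as "<clause>, <conjunction> <clause>".
# The pattern and the ratio-based reporting are illustrative assumptions,
# not from the source text.

CONJ = r"(?:and|but|which|because)"
TWO_CLAUSE = re.compile(r",\s+" + CONJ + r"\s+\w", re.IGNORECASE)

def two_clause_ratio(text: str) -> float:
    """Fraction of sentences containing a ', conjunction ...' join."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s]
    if not sentences:
        return 0.0
    hits = sum(1 for s in sentences if TWO_CLAUSE.search(s))
    return hits / len(sentences)
```

On the two quoted examples above plus one short sentence, this reports 2/3: both quotes match the shape, the short one does not. A human baseline would sit far lower.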
@@ -77,17 +77,17 @@ pinpoint but easy to feel.
### Uniform Sentences Per Paragraph
Model-generated paragraphs contain between three and five sentences. This count
-holds steady across an entire piece. If the first paragraph has four sentences,
-every subsequent paragraph will too. Human writers are much more varied (a
-single sentence followed by one that runs eight or nine) because they follow the
-shape of an idea, not a template.
+holds steady across a piece. If the first paragraph has four sentences, every
+subsequent paragraph will too. Human writers are much more varied (a single
+sentence followed by one that runs eight or nine) because they follow the shape
+of an idea, not a template.
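The three-to-five band and its steadiness are both checkable. A minimal sketch, assuming blank-line paragraph splitting and a near-zero spread cutoff; both are assumptions layered on top of the observation above:

```python
import re
from statistics import pstdev

# Sketch of a paragraph-uniformity check. Blank-line paragraph splitting,
# the 3-5 band, and the spread cutoff are illustrative assumptions.

def sentence_counts(text: str) -> list[int]:
    """Sentences per blank-line-separated paragraph."""
    paras = [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]
    return [len(re.findall(r"[.!?](?:\s|$)", p)) for p in paras]

def looks_templated(text: str) -> bool:
    """True when every paragraph sits in the 3-5 band with near-zero spread."""
    counts = sentence_counts(text)
    return (
        len(counts) >= 3
        and all(3 <= c <= 5 for c in counts)
        and pstdev(counts) < 0.5
    )
```

The point of the spread cutoff is the second half of the tell: a piece whose paragraphs are all four sentences long is flagged, while a piece that mixes one-sentence and eight-sentence paragraphs is not, even if its average lands in the band.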
### The Dramatic Fragment
Sentence fragments used as standalone paragraphs for emphasis, like "Full stop."
-or "Let that sink in." on their own line. Using one in an entire essay is a
-reasonable stylistic choice, but models drop them in once per section or more,
-at which point it becomes a habit rather than a deliberate decision.
+or "Let that sink in." on their own line. Using one in an essay is a reasonable
+stylistic choice, but models drop them in once per section or more, at which
+point it becomes a habit.
### The Pivot Paragraph
@@ -123,9 +123,9 @@ still says everything it needs to, the contrast was filler.
Models keep going after the sentence has already made its point, tacking on
clarifying phrases, adverbial modifiers, or restatements that add nothing.
-> "A person might lean on one or two of these habits across an entire essay, but
-> LLM output will use fifteen of them per paragraph, consistently, throughout
-> the entire piece."
+> "A person might lean on one or two of these habits across an essay, but LLM
+> output will use fifteen of them per paragraph, consistently, throughout the
+> entire piece."
This sentence could end at "paragraph." The words after it just repeat what "per
paragraph" already means. Models do this because they're optimizing for clarity
@@ -173,10 +173,10 @@ which is unnecessary when the writing itself already makes the importance clear.
Models rarely commit to an unqualified statement. Instead of saying a pattern
"always" or "never" does something, they write "almost always," "almost never,"
-"almost certainly," "almost exclusively." The word "almost" shows up at
-extraordinary density in model-generated analytical prose. It's a micro-hedge,
-less obvious than the full hedge stack but just as diagnostic when it appears
-ten or fifteen times in a single document.
+"almost certainly," "almost exclusively." The word "almost" shows up at high
+density in model-generated analytical prose. It's a micro-hedge, less obvious
+than the full hedge stack but just as diagnostic when it appears ten or fifteen
+times in a single document.
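This is the simplest tell to count. A minimal sketch; the ten-occurrence flag threshold mirrors the "ten or fifteen times" observation, and the generic `word` parameter is an illustrative addition:

```python
import re

# Sketch of a hedge-word counter. The ten-occurrence threshold mirrors
# the "ten or fifteen times" observation; the generic word parameter is
# an illustrative assumption.

def hedge_count(text: str, word: str = "almost") -> int:
    """Case-insensitive whole-word occurrence count."""
    return len(re.findall(rf"\b{re.escape(word)}\b", text, re.IGNORECASE))

def micro_hedge_flag(text: str, threshold: int = 10) -> bool:
    """Flag documents where the micro-hedge appears at diagnostic density."""
    return hedge_count(text) >= threshold
```

The whole-word boundary matters: without `\b`, "almond" or "palmost" would inflate the count.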
### "In an era of..."
@@ -493,9 +493,9 @@ roughly like this:
>
> **human:** the dramatic fragment paragraph has a tell in it
>
-> **model:** "One of these in an entire essay is a stylistic choice. One per
-> section is a tic." Two-clause parallel structure with the same shape, and also
-> a staccato pair.
+> **model:** "One of these in an essay is a stylistic choice. One per section is
+> a tic." Two-clause parallel structure with the same shape, and also a staccato
+> pair.
>
> **human:** add em dashes generally as a tell, and remove them all from the
> tells doc