In The Beginning There Was Slop

I’ve been slowly reading my copy of “The Internet Phone Book” and recently came across an essay in it by Elan Ullendorff called “The New Turing Test”.

Elan argues that what matters in a work isn’t the tools used to make it, but the “expressiveness” of the work itself (was it made “from someone, for someone, in a particular context”):

If something feels robotic or generic, it is those very qualities that make the work problematic, not the tools used.

This point reminded me that there was slop before AI came on the scene.

A lot of blogging was considered a primal form of slop when it first appeared: content of inferior substance, generated in quantities much vaster than heretofore considered possible.

And the truth is, perhaps a lot of the content in the blogosphere was “slop”.

But it wasn’t slop because of the tools that made it — like Movable Type or WordPress or Blogger.

It was slop because it lacked thought, care, and intention — the “expressiveness” Elan argues for.

You don’t need AI to produce slop because slop isn’t made by AI. It’s made by humans — AI is just the popular tool of choice for making it right now.

Slop existed long before LLMs came onto the scene.

It will doubtless exist long after too.



Jim Nielsen's Blog

11 Jan 2026 at 19:00

The AI Security Shakedown

Matthias Ott shared a link to a post from Anthropic titled “Disrupting the first reported AI-orchestrated cyber espionage campaign”, which I read because I’m interested in the messy intersection of AI and security.

I gotta say: I don’t know if I’ve ever read anything quite like this article.

At first, the article felt like a responsible disclosure — “Hey, we’re reaching an inflection point where AI models are being used effectively for security exploits. Look at this one.”

But then I read further and found statements like this:

[In the attack] Claude didn’t always work perfectly. It occasionally hallucinated […] This remains an obstacle to fully autonomous cyberattacks.

Wait, so is that a feature or a bug? Is it a good thing that your tool hallucinated and proved a stumbling block? Or is this a bug you hope to fix?

The more I read, the more difficult it became to discern whether this security incident was a helpful warning or a feature sell.

With the correct setup, threat actors can now use agentic AI systems for extended periods to do the work of entire teams of experienced hackers: analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator. Less experienced and resourced groups can now potentially perform large-scale attacks of this nature.

Shoot, this sounds like a product pitch! Don’t have the experience or resources to keep up with your competitors who are cyberattacking? We’ve got a tool for you!

Wait, so if you’re creating something that can cause so much havoc, why are you still making it? Oh good, they address this exact question:

This raises an important question: if AI models can be misused for cyberattacks at this scale, why continue to develop and release them? The answer is that the very abilities that allow Claude to be used in these attacks also make it crucial for cyber defense.

Ok, so the article is a product pitch:

  • We’ve reached a tipping point in security.
  • Look at this recent case where our AI was exploited to do malicious things with little human intervention.
  • No doubt this same thing will happen again.
  • You better go get our AI to protect yourself.

But those are my words. Here are theirs:

A fundamental change has occurred in cybersecurity. We advise security teams to experiment with applying AI for defense in areas like Security Operations Center automation, threat detection, vulnerability assessment, and incident response. We also advise developers to continue to invest in safeguards across their AI platforms, to prevent adversarial misuse. The techniques described above will doubtless be used by many more attackers—which makes industry threat sharing, improved detection methods, and stronger safety controls all the more critical.

It appears AI is simultaneously the problem and the solution.

It’s a great business to be in, if you think about it. You sell a tool for security exploits and you sell the self-same tool for protection against said exploits. Everybody wins!

I can’t help but read this post and think of a mafia shakedown. You know, where the mafia implies threats to get people to pay for their protection — a service they created the need for in the first place. “Nice system you got there, would be a shame if anyone hacked into it using AI. Better get some AI to protect yourself.”

I find it funny that the URL slug for the article is:

/disrupting-AI-espionage

That’s a missed opportunity. They could’ve named it:

/causing-and-disrupting-AI-espionage



Jim Nielsen's Blog

07 Jan 2026 at 19:00


