How AI will kill AI

Published: at 10:56 AM

The internet right now is filled with bots thanks to the advances in LLMs. The internet before 2022, umm, was not so great either. But there’s a significant difference between these two eras.

Clickbait

The internet has always had clickbait. Heck, the title of this blog is borderline clickbait. I’m not against clickbait, but I hate it when it teaches me nothing. Before ChatGPT, you could spot clickbait immediately by the meaningless filler words wrapped around your specific search term. Now? It’s `AI-generated’ articles. And the worst part is that these are EVERYWHERE! From personal blogs to Twitter comments, it feels like there’s nothing worth reading these days. The reading fatigue from all of this could lead to one of two things:

The death of long-form writing

Why bother writing blogs and notes if AI can generate things more cheaply? And why bother writing if it gets you fewer clicks? Now that Google is moving to take over the online ad industry with `Privacy Sandbox’, the hope of making money with ads on a website is near zero. But with AI, things take a devious turn.

AI-generated clickbait

And with LLMs, people can hijack clicks to promote their own products, or in some cases, to promote nothing and just be a nuisance. Yesterday, I was searching for the system prompt of `Claude-3’ (I can’t ask ChatGPT that yet), and I came across an article that looked interesting, so I clicked on it. It said:

Anthropic has released the system prompt for its latest language learning model (LLM), Claude 3. Notably, a single line in this prompt could cause the chatbot to simulate self-awareness far more convincingly than other models do.

Okay, and then? The `line’ was never mentioned anywhere later in the article. The rest of it read as if someone had asked ChatGPT to summarize a tweet and an article. And it was. It was based on this tweet:

Since the `line’ was in an image, the article couldn’t show it. So the article just made one up. And to top it off, browsing that website turned up multiple other articles with AI-generated images, and it’s just infuriating.

The future

If the amount or the quality of new data declines, LLMs built on it would also perform worse, and we could be looking at the enshittification of LLMs. You could claim that synthetic data could solve this issue, but a major reason LLMs became popular is the unique personality they acquire by training on the internet, mimicking Reddit and 4chan on demand. Ultimately, AI companies need to find a way either to identify AI-generated articles or to no longer depend on user-generated content.