So, does Google actually penalise AI-written content?

14 Dec 23

David Jenkin

Digital Marketing Solutions

It’s a question many in the marketing world are asking – for different reasons. Some see generative AI as a threat, others as an opportunity. But there are legitimate concerns about the proliferation of AI-generated content on the web, from low-quality content that harms the user experience to misinformation and plagiarism. Publishers also worry that using AI-written copy will see them penalised in search rankings. So, will they be?

Yes, Google can detect AI-written content

Search engines and many third-party tools have become quite good at identifying content that was written by a machine. They use a variety of techniques to spot the tell-tale signs of AI-generated content, particularly natural language processing (NLP) algorithms that look for syntactic and structural patterns.

The average person has, by now, likely picked up on some of those patterns themselves, especially in outputs from free-to-use AI tools like ChatGPT. Articles usually begin with a bland introduction along the lines of “In the fast-paced world of {insert subject}…” with excessive verbiage or “fluff” that re-emphasises each point while following a rigid and unimaginative structure.
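To make those patterns concrete, here is a toy sketch of scoring a passage for stock “fluff” phrases. This is purely illustrative – real detection tools rely on statistical NLP models, not keyword lists – and the phrase list and function name are invented for this example:

```python
# Toy illustration only: real AI-content detectors use statistical NLP
# models, not simple keyword matching like this.
STOCK_PHRASES = [
    "in the fast-paced world of",
    "in today's digital landscape",
    "it is important to note that",
]

def fluff_score(text: str) -> int:
    """Count occurrences of stock filler phrases in the text."""
    lowered = text.lower()
    return sum(lowered.count(phrase) for phrase in STOCK_PHRASES)

sample = "In the fast-paced world of marketing, it is important to note that results vary."
print(fluff_score(sample))  # two stock phrases found
```

A higher score here simply flags copy worth a closer human edit; it proves nothing on its own about how the text was produced.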

Detection tools are important for the likes of educators who need to grade written assignments or for publishers who pay writers for content and have found themselves flooded by AI-generated content. As large language models (LLMs) grow more sophisticated, it will become ever more difficult to distinguish their output from something written by a knowledgeable human writer without a detection tool.

But there is a place for computer-generated content, and using it on your website won’t necessarily harm your rankings.

What Google says

Even if it can detect it, the short answer is that Google does not explicitly penalise AI-generated content. That’s because it’s being careful not to stand in the way of technological progress, mindful of the many benefits that generative AI offers.

Google clarified its stance on AI-written content earlier this year, saying that it rewards high-quality content however it is produced. The search engine’s EEAT guidelines favour content that demonstrates Expertise, Experience, Authoritativeness, and Trustworthiness, and that remains as true as ever in the age of AI.

“Automation has long been used to generate helpful content,” says the February 2023 release, “such as sports scores, weather forecasts, and transcripts. AI has the ability to power new levels of expression and creativity, and to serve as a critical tool to help people create great content for the web.” The company remains focused on “maintaining a high bar for information quality and the overall helpfulness of content on Search”.

Using automation to manipulate rankings in search results is a violation of its spam policies, it adds. This is not a new practice, and spam is not a new phenomenon. What has changed is that ten years ago, low-quality spam content was written by human beings, whereas now it’s automated and can be produced at a much lower cost and on a much bigger scale. But to Google, the spam itself is the problem, not the way it was created.

The value of CRAFT

Writing for Search Engine Land, Julia McCoy says that if quality is what matters most, CRAFT is the key – an editing methodology that stands for:

C: Cut the fluff

R: Review, edit, optimise

A: Add images, visuals and media

F: Fact-check

T: Trust-build with tone, personal anecdotes and links

Writers optimising AI-generated content will need to focus on the CRAFT principle to create the sort of outstanding content that search engines seek to reward with higher rankings. Fact-checking is critical, as even advanced LLMs still have a tendency to hallucinate (make things up), which means the reliability of AI-generated content will remain an issue for the foreseeable future.

The challenges that remain

There are some issues that Google’s stance – and even the use of AI-detection tools – doesn’t fully resolve. AI-generated information can repurpose the work of others without proper attribution, for instance. AI can also be used to produce very convincing misinformation.

The EU has urged tech companies like Google and Facebook to start labelling AI-generated content and images to combat the spread of fake news and disinformation but, for now at least, there is no obligation for any publisher or platform to make such a disclosure.

Content that gets results

In the marketing world, the aim is to create content that attracts the right audiences, convinces them and leads them on the path to conversion. That content needs to be of the highest quality if it’s to be truly effective, written for a specific audience with empathy and a relatable human tone. The answer is human-AI collaboration.

You can get the benefit of that synergy by partnering with a digital marketing agency that makes smart use of AI. For that reason, you’ll be glad you chose League Digital for your content creation and digital marketing journey. Find out more about what we do.
