Who Wrote This? Can ChatGPT Rewrite A Better Article?

I Asked GenAI to Rewrite My Article, Here's What Happened

How does GenAI compare to human-created content? In this article, I explore the strengths and weaknesses of GenAI content by comparing my own article to a rewritten version provided to me by ChatGPT. While GenAI is a powerful tool for organizing and presenting information, it is not yet suitable for fostering deep engagement and critical reflection. The production of concise, quick content lacks the human thought process and examination of information that invites readers into critically engaging with material. As AI continues to evolve, it is important we evaluate the merit of speed and simplicity over the nuanced process of human experience and thought.

How does GenAI compare to human-created content?—Image generated with ChatGPT

Is GenAI a formidable challenge for journalists? Can you spot the difference between an article written by a human and one written by a computer? It’s a troubling thought to consider. But with GenAI becoming a mainstream tool, those questions have become increasingly valid. As someone learning and navigating this new age of AI content, I decided to put GenAI to the test in an experimental comparison.

On this blog, we provide 100% human-written content. The use of ChatGPT in this article is for experimental purposes: to evaluate GenAI’s ability to rewrite an original text it has been given.

The Experiment—GenAI Rewrite

I asked ChatGPT to rewrite one of my own pieces. I wanted to see just how well an AI language model could interpret, mimic and recreate the human voice. My original piece explored the adoption of AI search engines over traditional ones by weighing a range of opinions.

I tasked ChatGPT with rewriting the article under these guidelines: no text from the original piece could appear in the rewrite; the article had to be structured in the tone of a blog post; and it had to be written as ChatGPT thought the piece should have been written originally. Additionally, I gave ChatGPT context about its own position and skill level within the requested task. I set these guidelines to direct ChatGPT toward the result I was looking for: its best attempt at rewriting my article as passable human content.

I’ll admit the result was impressive, so much so that it was slightly off-putting. And yet it felt as if something was missing. In this post, I will compare the result ChatGPT gave me with my original article: what worked, what didn’t, and what AI’s interpretation says about the evolving nature of GenAI content.

Human-Created Content 

My reflection on the adoption of AI search engines over traditional ones expanded on the opinions and perspectives of tech journalists Kevin Roose, Matteo Wong and Joanna Stern. Each brought a different degree of adoption and a different mix of pros and cons.

Human writing shares personal experience and thought—Image generated with ChatGPT

I aimed to deepen the conversation surrounding their thoughts, pointing out contradictions, raising new questions and reflecting on my own use of AI search engines and my feelings about them. I quoted poignant passages I felt captured both sides of the argument, leaving readers free to form their own opinion on whether the state of adoption was cause for concern or curiosity. In an effort to open the conversation, I did my best to balance benefits and drawbacks, and to discuss how those points interacted, while keeping a casual and thoughtful tone.

If you’re interested in reading the original article, it is linked here.

ChatGPT Results 

So, what did ChatGPT give me? In 15 seconds or less, ChatGPT presented me with a polished, structured blog post examining Kevin Roose, Matteo Wong and Joanna Stern’s thoughts on the adoption of AI search engines. It opened with a hook, declaring, “AI search engines like ChatGPT, Perplexity, and others are beginning to challenge the status quo.” From there, it moved quickly to a summary of each journalist’s perspective.

Following the summaries was a “My Take” section in which it rewrote my personal experience with the AI tools, before closing with an eerily similar but condensed version of my conclusion, including some adoption figures from the original article.

What Was There and What Was Missing 

How does GenAI compare to human-created content? The writing was clear, concise and easily digestible, ideal for a blog post. But as I read through the generated version, it was immediately and noticeably abbreviated in both information and thought. The rewrite was too neat. Yes, it covered the overall points made in each article, but it lacked deeper engagement with the text. There were no moments of reflection, no quotations from the source material, no further exploration. Each line was presented as simple fact.

GenAI doesn’t like to admit it does not know—Image generated with ChatGPT

One of the key aspects ChatGPT could not seem to grasp was the ambiguity of the situation. Everywhere I had tried to push critical examination, ChatGPT went the opposite direction, presenting the answer in near-complete form. This avoidance of uncertainty left no space for the reader to wonder: not a second to consider a question or contemplate the situation.

Side by Side: The Matteo Wong Passage

Below are two excerpts, both from the introduction to the reflection on Matteo Wong’s opinion. The first is pulled from my original article, followed by the rewrite ChatGPT provided.

“Roose is far from the only one to have concerns about the growing popularity of AI. The Atlantic’s Matteo Wong placed a heavy critique on AI in his article “The Death of Search.” Wong’s piece focused less on the way in which a person uses AI and more on the way that AI changes our relationship with information. In his view, the concern is not AI’s credibility or factuality—those issues could be fixed as systems evolve—but the loss of an exploratory model of search. When people stop engaging critically with information, they lose the ability to evaluate and explore their own curiosity.” —Clover Meyer 

“In a more critical tone, The Atlantic’s Matteo Wong explored what AI search might cost us beyond convenience. Wong’s concern? That AI could erode our natural curiosity. Traditional search encourages exploration. We open tabs, compare viewpoints, and sometimes get pleasantly lost in the process. AI, on the other hand, delivers a single “answer” that can feel too definitive, too polished.”—ChatGPT

How to Tell the Difference

It’s impressive, right? I would say the key points are there, but the space to consider what you have just read is missing. So, how does GenAI compare to human-created content? The difference between human writing and GenAI content is the undeniable presence of personal experience and thought. The elements of curiosity and self-doubt cannot be manufactured by a machine. ChatGPT’s rewrite was certainly something; I’d even say it was competent, well-structured and informative. But it failed to include depth of reflection and consideration. It didn’t invite the reader into the larger conversation that was happening; it simply delivered the facts, neatly packaged.

What I Took Away 

To me, this is the key takeaway. While GenAI is an excellent tool for organizing and presenting information, it still lacks the capacity for true engagement. It can’t think through material the way a human can. 
As GenAI continues to evolve, we’ll need to assess the value we place on the examination and scrutiny of information. What do we want from the content we consume? Do we prefer fast, simplified answers that condense a line of thought into a sentence, or do we still value the messy, unpredictable, and human process of exploration?

Clover Meyer

Hello! My name is Clover Meyer, and I am a Public Relations major with a minor in Art at the University of Oregon. I’m currently studying abroad at the Institut Catholique de Paris, where I am exploring my interests in international communications, art history and marketing. Recently, I joined the Visionary Marketing team as an intern. I am looking forward to contributing to and learning from this creative and inspiring community!
