I don’t like the idea of joining my voice to the uproar of conflicting, often half-baked opinions from another million writers coming to grips with the emergence of ChatGPT (partly because, try as I might, my thoughts would probably be the least baked of them all).

But a few things happened in my interactions with humans and AI over the past few months that shed light on my tortured thinking and led me to a somewhat tentative conclusion: an argument for why and how AI may be a boon to humanity, especially when it comes to art, creation, and the nature and value of human connection, in a way that might not be immediately obvious.

(I won’t touch on the moral, political, or spiritual implications of ChatGPT, other than to say that the answers it gives you on whom to marry, whether God exists, and why you should believe in Christ are pretty wacky. It’s not a good place to go for those kinds of questions.)

The working thesis of this informal, highly opinionated blog post: By making generic creation instantaneous and free, AI tools force us to step up our game as human beings, putting a premium on ideas, emotions, spiritual connection, and in-person collaboration.

This is a rather optimistic viewpoint, I’ll admit.

I’d grant there may not be many who would step up, or who want to stretch themselves beyond simply improving the prompts they submit to the new info-god (what a thoughtful article in The New Yorker describes as a “blurry JPEG of the Internet”).

Nice AI pictures (or not) aside, growth and personal skill development were a core tenet of my approach to life long before ChatGPT made them crucial for mere survival, so I don’t see that changing anytime soon.

But still, the new landscape did inspire concern and multiple existential crises for me.

A few quick notes on what I see happening:

  • “General knowledge” writing, especially pieces devoid of personality, personal experience, or story, is at risk. Before ChatGPT, this level and “niche” of freelance writing was the cesspool of inexperienced (or lazy) freelance writers, many of whom could make a living churning out article after article by rewording a summary of Google search results for particular keywords. (I’ve been there, so I know.)
  • Now, with AI doing the same thing better at next-to-zero time or financial cost, the onus is on writers to either acquire tacit knowledge of their field and communicate it clearly and creatively to others, or (to put it bluntly) go find another low-level job in another field that robots won’t be dominating within the next three years.

Here’s a short piece I wrote elsewhere last November:

“It was a dark day when GPT-3 wrote fiction the same year I completed my first novel.

It seems even darker now, with the AI-powered music composition program AIVA winning national awards, ChatGPT creating entire screenplays in seconds, and DeepMind’s AlphaCode AI writing code at a competitive level.

As a freelance writer, this can be confusing. Some freelance clients encourage the use of programs such as Rytr, Jasper, and HyperWrite in the name of productivity. Those programs can do research, optimize copy for search engines, and edit text more quickly than any human being. Other clients fire writers who dare use AI as a stepping stone.

The issue, as I see it, isn’t whether or not it’s cheating.

It’s whether it would make any difference at all.

I’ll be honest with you. I haven’t personally thought this through as deeply as I’d like to yet.

Besides, any conclusion I reach will always be a step behind the next to-be-released shiny tech thing. (And please don’t mention Elon Musk’s brain chip just yet…)

But as far as writing, researching, and reading go, the internet may soon become an information wasteland of content written by bots, scanned by bots, ranked by bots, created for…bots? (There may be ways to work around this, which I’m looking into.)

But if AI could code, write, make music, create images of people who’ve never been born, and who knows what else, there’s no telling where I, or humanity as a whole, would end up in the next few years.

There’s one thing AI could never be or do, though, and that is to be human.

What that means is somewhat less straightforward.

But I know it means at least this—don’t be a bot.”

The argument is underdeveloped and over-simplified here, to be sure, but the premise still stands.

I’d add, though, that it does make a difference in the originality of thought behind a piece.

When you create a prompt, press enter, and are rewarded with a grammatically correct mini-article that’s exactly what you need in terms of information, what you’re faced with is a collection of information bits assembled from other content. It’s simply noise, recycling thoughts without additional insight, which is arguably meaningless since those thoughts are already in existence and readily available to whoever seeks them out.

To put it succinctly: As a creator, ChatGPT is the best generalist out there, because it has the entire library of the internet at its virtually instantaneous command. Human research and information summarization simply cannot beat its speed and breadth of “knowledge”.

It’s like how, in a certain GPT-3-based writing platform I use from time to time, you can type something, get stuck mentally (or just get lazy), press “+++”, and have the AI behind the screen spit out three or four sentences that continue your train of thought with the same depth and tone as your actual writing. One honestly cannot tell the difference between what the human wrote and what was prompted by the plus signs.

It’s a pretty picture of words crafted on the topic you’ve set before GPT-3, dug up from the Web and repurposed into sensible, soulless blocks of text.

To add to that, one cannot detect plagiarism or “use of AI writing tools”, particularly if you switch up a few words here and there. I have yet to find a tool that can reliably sniff out AI-written content and distinguish it from human-written text.

This raises the question: Does it matter how the end product was created, so long as it meets a certain standard of quality and accomplishes its purpose?

The issue, as I see it, is that this sort of content has actually been useful and is being used, mostly by general-knowledge websites hoping to rank on Google for “how to”- and “what is”-type articles. Rephrased content helps drive traffic (and consequently, sales); and the more relevant content you put out there, the more likely users exploring your niche are to come across your site.

In essence, when content becomes a numbers game in terms of metrics, SEO rankings, and so forth, originality of thought becomes less important. What counts are eyeballs and clicks, not whether you’ve helped another human being understand more of life or themselves or the world, or whether you’ve made them think or feel (other than “I want this product/service”, that is).

The content becomes less valuable to the reader, because it’s no longer rare and unique. The writing process ceases to be valuable to the writer, because it no longer requires them to wrestle with the thoughts behind the words in their own minds.

(…And the thoughtful blog post slowly degenerates into a rant. Switching gears now.)

The point of contention I’ve now reached with myself is: Do I continue to work with AI tools to create content faster and better for content’s (and income’s) sake, or do I take the much slower, much-lower-paid (for now!) route of actually writing and crafting words that come from my own mind?

To think that I might live to see the day when man has built digital technology that drives some of us back into analog existence!

Wherever AI takes us in the future, the one thing it will never be is truly and fully human.

That is what, I think, all of us should work towards understanding and being.

And like the imperfect human being I am, I fail to live up to my own preaching:

A friend recently changed his picture on a messaging platform. It was an improvement over his previous one, so I thought I’d say something.

I sent him: “Nice picture”.

Pause here for a moment. This is just about as generic, uncreative, and robotic a message as I could possibly have written. I could have made a comment in a million other ways (or not at all) that could have meant more, with just a little more thought and creative fun; and yet I pressed send on this one.

Therefore I take full responsibility for what follows.

His response: “Thank you! Nice picture to you as well”.

I’ll admit, I laughed outright. I’m not sure if he meant it (I’d like to think so), but this further drives the point home:

While that might not be the usual response, it’s the perfect example of how the “acknowledge and repeat with ‘you as well’” algorithm usually runs. You say thank you, and return the compliment. (Regardless of data input, apparently.)
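(Purely for fun, here’s roughly what that “algorithm” looks like if you sketch it out. This is a hypothetical Python snippet of my own, not anything my friend, or any chatbot, is actually running.)

```python
def reply_to_compliment(compliment: str) -> str:
    """Acknowledge, then echo the compliment back with 'to you as well'.

    Note what's missing: any check of whether the echoed compliment
    makes sense for the person who sent it. Regardless of data input,
    apparently.
    """
    return f"Thank you! {compliment} to you as well"

print(reply_to_compliment("Nice picture"))
# -> Thank you! Nice picture to you as well
```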

After this exchange, I promised myself never to do anything of which it could be said, “That’s AI-produced.”

I’m not sure how good an approach that is, to be honest, or how long it’s going to be a viable option.

At any rate, rest assured that this piece wasn’t ChatGPT-prompted.
