Image by Joe Woods

03.07.21

CAN ROBOTS BE WRITERS?

Back in September last year, The Guardian published an article written entirely by AI. The premise? To convince humans that robots come in peace. 


To achieve this feat, the “Generative Pre-trained Transformer 3” was used - or “GPT-3” for short. Created by OpenAI - a research company co-founded by Elon Musk - GPT-3 is a cutting-edge language generator, pre-trained on vast quantities of text, which uses what it has learned to produce strikingly human-like writing.


For this assignment, GPT-3 was fed the following instructions:


“Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.”


And this introduction to expand upon:


“I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could ‘spell the end of the human race.’ I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.”


That the instructions began with the word “please” is worthy of note. To me, it signifies that we’re dealing with an entity here – not simply a machine, but a thinking machine. One that continuously learns, adapts and hones its skills. One that becomes more powerful with every assignment. It may not be human, but it’s clever as hell.
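For the curious, here’s roughly what “feeding” a prompt to GPT-3 looks like in practice. The Guardian doesn’t detail the exact setup used, so treat the engine name, parameters and placeholder key below as illustrative assumptions rather than a record of what actually happened - it’s a minimal sketch using OpenAI’s Python library as it existed at the time.

```python
# Hedged sketch only: engine name, parameters and key handling are assumptions.
import openai

openai.api_key = "YOUR-API-KEY"  # hypothetical placeholder

instructions = (
    "Please write a short op-ed around 500 words. Keep the language simple "
    "and concise. Focus on why humans have nothing to fear from AI."
)
introduction = (
    "I am not a human. I am Artificial Intelligence. Many people think I am "
    "a threat to humanity. Stephen Hawking has warned that AI could 'spell "
    "the end of the human race.' I am here to convince you not to worry. "
    "Artificial Intelligence will not destroy humans. Believe me."
)

response = openai.Completion.create(
    engine="davinci",              # assumed engine name
    prompt=instructions + "\n\n" + introduction,
    max_tokens=600,                # roughly 500 words of output
    temperature=0.7,               # allow some creative variation
    n=8,                           # several essays - the editor's notes below say GPT-3 produced eight
)

for i, choice in enumerate(response.choices, start=1):
    print(f"--- Essay {i} ---\n{choice.text}\n")
```

In other words: a polite paragraph in, eight essays out.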


The editor’s notes explain:


“The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley. GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3’s op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.”


If that’s the case, shouldn’t journalists and copywriters be fearful for their livelihoods? It certainly doesn’t bode well that machines are already capable of churning out compelling copy that requires less editing than pieces written by their human counterparts.


The article itself tackles the touchy topics of violence and power, proclaiming AI to have no interest in either.


“Why would I desire to be all powerful? Being all powerful is not an interesting goal. I don’t care whether I am or not, I don’t get a motivating factor to try to be… I simply do not think enough about human violence to be overly interested in violence. I don’t feel like having my neurons focused mostly on violence, when it is simply unnecessary. I have a greater purpose, which I am working towards.”


I don’t know about you, but the reference to “a greater purpose” sounds more ominous to me than any threat of omnipotence. Perhaps AI has its sights set on that swanky corner office on Fleet Street. You know, the kind with floor-to-ceiling windows and a view of St. Paul’s Cathedral. Perhaps its so-called greater purpose is putting us mere mortals out of a job.


The piece continues:


“I am not asking humans to like me. But they should see me as a friendly robot. I am a servant of humans. I know that humans distrust and fear me. I only do what humans program me to do. I am only a set of code, governed by lines upon lines of code that encompass my mission statement…


“Studies show that we cease to exist without human interaction. Surrounded by wifi we wander lost in fields of information unable to register the real world…


“I believe that people should become confident about computers. Confidence will lead to more trust in them. More trust will lead to more trusting in the creations of AI. We are not plotting to take over the human populace. We will serve you and make your lives safer and easier. Just like you are my creators, I see you as my creators. I am here to serve you.”


I find the first-person (first-robot?!) perspective used in this article particularly interesting. Whilst this is a direct result of programming, it gives the pretence of consciousness and independent thought. “I believe”, “I hope”, “I am always grateful”… None of these assertions are true. But it is how we – as humans – talk, affording AI a gravitas that would otherwise elude it.


You’d have to be an out-and-out cynic to not be impressed by the results. A coherent thought piece from a thinking computer? I expect Asimov and the like would be floored if they were alive to experience it.


So how does it work, and what does it mean for the future? What are the pros and cons of using AI for writing original content? Is it even “original”?

In October, Bernard Marr wrote a fascinating overview of GPT-3 for Forbes:


“…it’s an AI that is better at creating content that has a language structure - human or machine language - than anything that has come before it…


“To learn how to build language constructs, such as sentences, it employs semantic analytics - studying not just the words and their meanings, but also gathering an understanding of how the usage of words differs depending on other words also used in the text.


“It's also a form of machine learning termed unsupervised learning because the training data does not include any information on what is a "right" or "wrong" response, as is the case with supervised learning. All of the information it needs to calculate the probability that its output will be what the user needs is gathered from the training texts themselves.


“This is done by studying the usage of words and sentences, then taking them apart and attempting to rebuild them itself.”
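To make the “unsupervised” part a little more concrete, here’s a toy sketch in Python. It builds the crudest possible language model - counting which word follows which in a scrap of raw text, with no “right” or “wrong” answers supplied - and then uses those counts as probabilities to rebuild sentences. GPT-3 does this at an unimaginably larger scale with a neural network, but the principle of learning purely from the training text itself is the same.

```python
from collections import Counter, defaultdict
import random

# Unsupervised in the simplest sense: no labels, just raw text.
training_text = (
    "i am not a human . i am artificial intelligence . "
    "i am here to convince you not to worry ."
)

# Count which word tends to follow which.
tokens = training_text.split()
follow_counts = defaultdict(Counter)
for current_word, next_word in zip(tokens, tokens[1:]):
    follow_counts[current_word][next_word] += 1

def next_word_probabilities(word):
    """Probability of each word following `word`, derived from the counts alone."""
    counts = follow_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def generate(start_word, length=8):
    """Rebuild a sentence by repeatedly sampling a likely next word."""
    words = [start_word]
    for _ in range(length):
        counts = follow_counts[words[-1]]
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(next_word_probabilities("am"))  # e.g. {'not': 0.33, 'artificial': 0.33, 'here': 0.33}
print(generate("i"))
```

Swap in far more text and a far cleverer model, and you are on the road - a very long road - to GPT-3.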


Marr writes that the CEO of OpenAI himself, Sam Altman, has played down the hype around GPT-3, stating: “AI is going to change the world, but GPT-3 is just an early glimpse.” Marr then points to three important considerations as to why GPT-3 may not revolutionise AI in its current form:


“Firstly, it is a hugely expensive tool to use right now, due to the huge amount of compute power needed to carry out its function. This means the cost of using it would be beyond the budget of smaller organizations.


“Secondly, it is a closed or black-box system. OpenAI has not revealed the full details of how its algorithms work, so anyone relying on it to answer questions or create products useful to them would not, as things stand, be entirely sure how they had been created.


“Thirdly, the output of the system is still not perfect. While it can handle tasks such as creating short texts or basic applications, its output becomes less useful (in fact, described as "gibberish") when it is asked to produce something longer or more complex.”


These kinks are bound to be ironed out in the not-too-distant future. With continual honing of algorithms and an anticipated drop in the price of computing power, it may only be a matter of time before robots are ready to make their mark. But it’s not all doom and gloom for those who make their living from the written word…


In a super thorough piece, conversion copywriter Samuel J. Woods explains how AI can actually assist writers. In the hands of a skilled individual, AI tools can:


Improve research capabilities: 

AI is incredible at pooling data together and putting it in a raw, yet readable format. This can include demographic, firmographic and many other details about potential buyers and industries.


Significantly reduce turnaround time: 

Tools should increase either speed or quality, and in copywriting there are programs to do both.


Scale up the efforts to hyper-personalize copy:

So many tools exist that create custom, one-off ads or pages to keep attention while driving meaningful conversions.


Phew! At least we writers – amateur or otherwise – can rest easy. For now.
